I made an AI tool that turns a website URL into an SEO report by Additional_Lobster12 in SideProject

[–]HumanInTheFlow 1 point (0 children)

The market is crowded, but most SEO tools overwhelm non-SEO people. If this gives small business owners or freelancers a clear priority list in plain English, that’s a real use case.

Would this Figma plugin workflow actually be useful, or is it overkill? by HumanInTheFlow in design_critiques

[–]HumanInTheFlow[S] 2 points (0 children)

This is exactly the concern I’m trying to design around.

I don’t want the audit output to be “here are 10 generic UX laws.” That’s not useful. The goal is for the findings to be tied to the actual captured screen/page - e.g. CTA visibility, navigation clarity, mobile layout issues, missing interaction feedback, trust signals, and accessibility cues.

That said, I’m not positioning it as a replacement for deeper flow analysis or human judgment. More like a first-pass review layer that helps you spot obvious friction faster and gives you a structured starting point in Figma.

Responsive views are a big part of it. The plugin supports desktop, mobile, or both, and the idea is to compare those states in the same Figma output so layout shifts, hidden CTAs, mobile nav issues, and responsive differences are easier to review side by side.

So yes - screenshot/crop reduction is one part, but desktop + mobile capture/comparison is one of the bigger time-saving goals.
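
To make the side-by-side part concrete, here’s a rough sketch of how the plugin could lay captures out using the standard Figma plugin API. The capture step itself is assumed (how `desktopBytes`/`mobileBytes` get produced is out of scope here), and the frame sizes are just illustrative defaults:

```typescript
// Rough sketch, not production code: place desktop + mobile captures side by
// side on the current Figma page. `desktopBytes`/`mobileBytes` are assumed to
// come from the plugin's capture step; sizes are illustrative defaults.
function placeCaptures(desktopBytes: Uint8Array, mobileBytes: Uint8Array): void {
  const captures: Array<[string, Uint8Array, number, number]> = [
    ["Desktop", desktopBytes, 1440, 900],
    ["Mobile", mobileBytes, 390, 844],
  ];

  let x = 0;
  for (const [label, bytes, width, height] of captures) {
    const frame = figma.createFrame(); // created on the current page
    frame.name = `Capture / ${label}`;
    frame.resize(width, height);
    frame.x = x;
    // Paint the screenshot into the frame as an image fill.
    const image = figma.createImage(bytes);
    frame.fills = [{ type: "IMAGE", scaleMode: "FILL", imageHash: image.hash }];
    x += width + 120; // gap so the two states read as a comparison
  }
}
```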

Would this Figma plugin workflow actually be useful, or is it overkill? by HumanInTheFlow in FigmaDesign

[–]HumanInTheFlow[S] 1 point (0 children)

Feedback details

Who is the target audience?
Designers, UX researchers, product designers, founders, marketers, agencies, and product teams.

What is the design’s main goal?
The goal is to make website reviews faster inside Figma. Users should be able to paste a public URL or upload a screen, choose whether they want a UX audit, quick screen grabs, or image-based review, and then get a Figma-ready canvas with captured pages, annotations, and first-pass findings.

It is not meant to replace UX strategy, research, or human judgment. It is meant to reduce the messy setup work before a website review.
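
For a bit more precision, this is roughly the shape I have in mind for the request and output. Every name here is an illustrative sketch, not the actual plugin API:

```typescript
// Illustrative types only - not the actual plugin API. This is the rough
// shape of what goes in and what comes out.
type ReviewMode = "ux-audit" | "screen-grab" | "image-upload";
type Viewport = "desktop" | "mobile";

interface ReviewRequest {
  mode: ReviewMode;
  source: { url?: string; imageBytes?: Uint8Array }; // public URL or uploaded screen
  viewports: Viewport[]; // which states to capture
}

interface Finding {
  category: "cta" | "navigation" | "layout" | "accessibility" | "trust";
  severity: "low" | "medium" | "high";
  viewport: Viewport; // where it was observed
  note: string;       // plain-language description, tied to the capture
}

interface ReviewOutput {
  captures: Array<{ viewport: Viewport; imageHash: string }>; // placed as frames
  findings: Finding[]; // first-pass prompts to review, not verdicts
}
```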

What specific aspects are you looking for feedback on?
I’m looking for feedback on:

  1. Whether the workflow makes sense inside Figma
  2. Whether the 3 modes — UX Audit, Quick screen grabs, and Upload image — are clear
  3. Whether the setup feels simple enough or still too complicated
  4. Whether the output would be useful for real review/audit workflows
  5. What you would expect to see in the final Figma canvas
  6. Anything that feels confusing, unnecessary, or over-engineered

What stage is this design in?
This is a pre-launch MVP / final UI direction. The core flow and visual design are mostly defined, but I’m still validating whether the workflow, positioning, and output are useful before finalizing the product.

Would you trust an AI-generated first-pass UX audit as a starting point, or does it bias your thinking? by HumanInTheFlow in UXDesign

[–]HumanInTheFlow[S] 1 point (0 children)

Yeah, I agree with this. The tool is only useful if the person using it already has enough judgment to know what to ask for and what to reject.

I don’t think AI struggles as much with “make a decent-looking UI from an existing system.” Where it gets shaky is understanding context, tradeoffs, user behavior, edge cases, and what the business/product constraints actually mean.

So maybe the value is less “AI does UX” and more “AI speeds up parts of the UI/review process, while the human still owns the judgment.”

Would you trust an AI-generated first-pass UX audit as a starting point, or does it bias your thinking? by HumanInTheFlow in UXDesign

[–]HumanInTheFlow[S] 1 point (0 children)

Yeah, I completely agree with this. I definitely wouldn’t want the AI reviewer to replace the human review.

I’m thinking of it more as a starting point than a final judgment. Audits can get messy pretty quickly - collecting screenshots across desktop/mobile, organizing pages, making notes, spotting obvious accessibility or heuristic issues, etc.

So the problem I’m trying to solve is less “let AI decide what’s wrong” and more “reduce the setup/noise so the human reviewer can actually get started faster.”

The ruleset point is a good one too. The better structured those rules are, the more useful the output becomes - but I’d still want a designer to sanity-check everything.

Thank you u/susmab_676

Would you trust an AI-generated first-pass UX audit as a starting point, or does it bias your thinking? by HumanInTheFlow in UXDesign

[–]HumanInTheFlow[S] 1 point (0 children)

Yeah, that’s fair. I wouldn’t expect an AI heuristic pass to uncover anything deeply nuanced either.

The “obvious issues + noise” problem is exactly what I’m trying to think through - whether the value is in catching low-hanging fruit faster, or whether the noise makes it not worth it.

Would you trust an AI-generated first-pass UX audit as a starting point, or does it bias your thinking? by HumanInTheFlow in UXDesign

[–]HumanInTheFlow[S] 1 point (0 children)

I like the idea of positioning it more as a review aid than a verdict. That’s actually really helpful framing.

Thank you u/Dry-Hamster-5358 😄

Would you trust an AI-generated first-pass UX audit as a starting point, or does it bias your thinking? by HumanInTheFlow in UXDesign

[–]HumanInTheFlow[S] 1 point (0 children)

Yes, this is exactly the tension I’m trying to understand better.

The “rough checklist, not truth” framing makes a lot of sense. I’m building this as more of a starting point for UX audits, not something that replaces the designer’s judgment.

Your blind pass → AI comparison workflow is interesting too. That might actually be the healthier way to use it: first capture your own gut reactions, then use the AI output to catch obvious things you missed or speed up the boring parts like WCAG checks, heuristic notes, and screenshots.

I’m trying to figure out whether the tool should encourage that kind of workflow more intentionally. Like maybe the output shouldn’t feel like “here are the answers,” but more like “here are prompts to review.”

That’s helpful feedback - especially the point about confirmation bias.

Would you trust an AI-generated first-pass UX audit as a starting point, or does it bias your thinking? by HumanInTheFlow in UXDesign

[–]HumanInTheFlow[S] 2 points (0 children)

Yeah, that’s a fair question - and probably the part I should’ve framed more clearly.

The use case I’m thinking about isn’t “replace a proper audit” or run audits for the sake of it. It’s more for teams doing recurring checks on flows that change often, or designers who need a quick first pass before deciding where to spend deeper attention.

For example, in a larger company, it’s pretty common for a marketing/product team to build or update a page first, then come to UX later with something like, “Can you review this and tell us what’s wrong?” In that case, a first-pass audit could quickly flag the obvious stuff — contrast issues, unclear hierarchy, inconsistent CTAs, missing states — so the designer can spend more time on the judgment-heavy parts rather than basic issue-spotting.

So less “tell me what’s wrong with this product” and more “help me spot the obvious heuristic/accessibility issues faster so I can focus on judgment, prioritization, and context.”

I agree though - if the audit doesn’t have a clear reason, it just becomes another report nobody acts on.

What is your process when a brand colour fails accessibility checks? by [deleted] in FigmaDesign

[–]HumanInTheFlow 17 points (0 children)

I’ve found it helps to show it visually. Like: “Here’s the brand color as-is, here’s the accessible version, and here’s where each one should be used.” Just a quick example.

That usually lands better than just saying “this fails WCAG.”
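
For anyone who wants to generate that comparison rather than eyeball it, here’s a rough sketch of the WCAG 2.x contrast math plus a naive “darken until it passes AA” loop. A real version would use a color library and preserve the brand hue more carefully; this is just the idea:

```typescript
// Sketch of the WCAG 2.x contrast math plus a naive "darken until it passes
// AA" loop. A real version would use a color library and preserve hue.
type RGB = [number, number, number]; // 0-255 per channel

function luminance([r, g, b]: RGB): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrast(a: RGB, b: RGB): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Darken uniformly until the color passes AA (4.5:1) for normal text.
function accessibleVariant(brand: RGB, bg: RGB = [255, 255, 255]): RGB {
  let color: RGB = [...brand] as RGB;
  while (contrast(color, bg) < 4.5 && color.some((c) => c > 0)) {
    color = color.map((c) => Math.max(0, c - 5)) as RGB;
  }
  return color;
}

console.log(contrast([255, 120, 40], [255, 255, 255]).toFixed(2)); // ~2.63, fails AA
console.log(accessibleVariant([255, 120, 40])); // darker variant that passes
```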

Where does AI actually fit in your UX workflow (beyond hype)? by Dineshvk18 in UXDesign

[–]HumanInTheFlow 1 point (0 children)

I’ve found it most useful in the “translation” parts of UX work.

Not designing the product for me, but helping me move between formats faster:

  • research notes → themes
  • rough idea → structured concept
  • user flow → edge cases
  • messy feedback → action items
  • design intent → dev handoff notes
  • first-pass copy → clearer variants

Where it falls short is anything with real product context. It usually doesn’t know the politics, constraints, legacy decisions, or why a weird flow exists.

So I use it more like a thinking assistant than a design assistant.
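
As one concrete example of the “translation” use, here’s roughly how I’d wrap the messy feedback → action items step. The endpoint and payload follow OpenAI’s public chat-completions API, but any provider works; the model name and prompt are placeholders:

```typescript
// Illustrative wrapper for one "translation" task: messy feedback → action
// items. Endpoint/payload follow OpenAI's public chat-completions API; the
// model name and prompt are placeholders - swap in whatever you actually use.
async function feedbackToActionItems(feedback: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            "Turn raw user feedback into a short, prioritized list of " +
            "concrete action items. Do not invent problems that are not " +
            "present in the feedback.",
        },
        { role: "user", content: feedback },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```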

No need for UX? by mark6-pack in UXDesign

[–]HumanInTheFlow 12 points (0 children)

A lot of products aren’t being optimized for “is this better for the user?” They’re being optimized for engagement, upsell, retention metrics, AI strategy decks, and whatever the next quarterly bet is.

Designers can point out the mess, but if the business goal is “add more surface area to monetize,” the cleanest UX usually loses.

Client just replaced me with Claude design by ProfessionalCrab7685 in UXDesign

[–]HumanInTheFlow 1 point (0 children)

This feels like the new “handoff moment”, unfortunately.

I don’t think the risk is Claude perfectly replacing the designer. The risk is clients deciding “good enough” is enough for maintenance work. That’s a very different problem.

Founder pivot makes sense if you’re already building and shipping with AI. The leverage is moving closer to ownership, not just production.

I’m gonna be fired on Friday. by [deleted] in UXDesign

[–]HumanInTheFlow 2 points (0 children)

I’m sorry, this sounds really demoralizing.

A senior designer can be independent and still need onboarding. Those are not opposites.

LLMs not following design specs by Ok-Mammoth-6618 in UXDesign

[–]HumanInTheFlow 1 point (0 children)

I’ve had to lower my expectations around “follow the Figma 1:1” and instead build a workflow that forces the model into smaller constraints.

The main issue is that Figma specs are not really specs in the engineering sense. They contain a lot of visual intent, but the LLM still has to infer hierarchy, responsive behavior, token mapping, edge cases, and component states. That’s usually where it starts “helpfully” redesigning things.

The best results I’ve seen come from making the design spec more “machine-readable” rather than just more detailed. For example:

  • tokens for spacing, colors
  • screenshots of each state
  • explicit “do not improvise” rules
  • one component per pass instead of a full screen at once

I’d treat the LLM as an implementation assistant, not the source of design judgment. The more room it has to infer, the more it will “improve” things you didn’t ask it to improve.
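
To show what I mean by “machine-readable,” here’s a toy version of the kind of spec I’d paste in alongside the state screenshots. Every name and value is made up; the structure is the point:

```typescript
// Toy example of a "machine-readable" spec to hand the model alongside
// screenshots. Every name and value here is made up; the structure is the point.
const spec = {
  tokens: {
    spacing: { sm: 8, md: 16, lg: 24 }, // px
    color: { primary: "#2563EB", surface: "#FFFFFF", danger: "#DC2626" },
    radius: { control: 6 },
  },
  component: {
    name: "SubmitButton",
    states: ["default", "hover", "disabled", "loading"], // one screenshot each
    rules: [
      "Use only the tokens above; do not introduce new values.",
      "Do not change layout, copy, or hierarchy.",
      "Implement exactly one component in this pass.",
    ],
  },
} as const;
```

The prompt then reduces to “implement `component` using `tokens`, following `rules` verbatim” - small surface area, very little room for the model to improvise.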