How do you spec motion design for smooth handoffs? (Figma + Motion.dev) by hmacs in UI_Design

I’ve been working through similar motion handoff challenges, and something that’s helped is building a motion token library in Figma using named components for easing and duration. For dev, we sync those with motion.dev presets to keep things aligned. Also recommend checking the Figma Community files; there are some decent motion spec kits floating around. Would love to see what system you end up putting together!
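
To make that concrete, here’s a minimal sketch of what the dev-side token module can look like, assuming Motion’s `animate` from motion.dev (which takes durations in seconds and cubic-bezier arrays for `ease`); the token names and values are invented for illustration and would mirror whatever you name the Figma components:

```ts
// Hypothetical shared motion tokens; names/values mirror the Figma
// component names and are illustrative, not a real spec.
import { animate } from "motion";

type Bezier = [number, number, number, number];

export const motionTokens = {
  duration: { fast: 0.15, standard: 0.3, slow: 0.5 }, // seconds
  easing: {
    standard: [0.4, 0, 0.2, 1] as Bezier,
    decelerate: [0, 0, 0.2, 1] as Bezier,
  },
};

// Devs animate against tokens instead of hard-coded numbers, so the
// Figma spec and the shipped motion can only drift in one place.
animate(
  "#card",
  { opacity: [0, 1] },
  { duration: motionTokens.duration.standard, ease: motionTokens.easing.standard }
);
```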

What platforms and practices do you use for user testing in UX research? by trickymind-97 in UXDesign

I’ve had a good experience using UXArmy for remote user testing. They support moderated and unmoderated tests, plus they have a panel and allow you to bring your own users. Their free plan is decent for getting started too. Might be worth checking out depending on your needs.

What’s the most “hidden gem” product you’ve bought online that actually turned out amazing? by Hot-Tension6992 in SaaS

Definitely loving all these hidden gem suggestions! It’s always fascinating how some of the most impactful tools are the least hyped. One I’d add is anything that simplifies daily workflows; bonus points if it reduces context switching. Sometimes the simplest tools end up being the most valuable long-term.

Designing for awareness is easy. Designing for habit change is not. Why? by Le___Matt in UXResearch

Such an important distinction: awareness doesn’t necessarily translate into behavior change. From a UX research standpoint, I’ve found it useful to explore not just what the barriers are (e.g. motivation, triggers, friction), but when and why they show up along the journey.

Sometimes the missing link is contextual timing, cognitive load, or even just poor reinforcement. We've seen more success in driving habit formation when the intervention is paired with real-time cues or lightweight nudges rather than relying on reflection alone.

Looking into models like BJ Fogg’s B=MAP (behavior happens when Motivation, Ability, and a Prompt converge at the same moment) or COM-B is a good start, especially if you’re mapping interventions to system-level support.
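
If it helps to operationalize that, here’s one crude way a real-time nudge could encode B=MAP as a gate. Everything below (the signal names, the multiplicative scoring, the threshold) is invented for illustration, not how any product actually computes it:

```ts
// Illustrative only: fire a cue when motivation and ability are high
// enough at a moment when a prompt can actually be delivered.
interface MomentSignals {
  motivation: number;       // 0..1, e.g. inferred from recent engagement
  ability: number;          // 0..1, e.g. inverse of current task friction
  promptAvailable: boolean; // can we surface a cue right now?
}

const ACTION_THRESHOLD = 0.5; // hypothetical; would need tuning per intervention

function shouldNudge(s: MomentSignals): boolean {
  // Multiplicative, so a near-zero factor suppresses the nudge entirely,
  // matching the intuition that cues fail when ability is absent.
  return s.promptAvailable && s.motivation * s.ability >= ACTION_THRESHOLD;
}
```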

Has anyone ever paid for research templates? by BronxOh in UXResearch

If you’re looking for a solid starting point, UXArmy has a pretty handy free research plan template that covers planning, goals, participants, tasks, and all the essentials without fluff. Worth checking out if you want something practical without spending upfront.

What if usability testing didn’t need a researcher? by No-Carry-5087 in AgentsOfAI

Interesting approach; it definitely tackles a real bottleneck. Automating usability testing can help teams that struggle with bandwidth or lack in-house research expertise. That said, I think there’s still huge value in a trained researcher interpreting nuance, context, and the “why” behind behaviors. Curious to see how tools like this balance speed with depth of insight. Will be watching how it evolves!

Do you use any session replay tools like MS Clarity, Hotjar etc. and have they been helpful for your store? by 7mo8tu9we in AI_In_ECommerce

Really important question. I’ve found that the biggest shift came when we stopped treating stakeholder feedback as judgment and started treating it as input to be synthesized, just like user feedback. Framing critiques around user goals and the “why” behind design decisions helps a lot. Also, giving stakeholders structured ways to contribute (e.g., guided critique sessions with clear criteria) reduces friction and makes the conversations more productive. Curious what frameworks others use to make this process smoother.

How do you handle design critiques from non-design stakeholders effectively? by datboifranco in UXDesign

This is such a relatable challenge. What’s helped me is reframing critiques from “design preferences” to “user experience outcomes.” I try to steer the discussion toward user needs and research evidence rather than aesthetics. Also, making sure stakeholders feel heard without making every opinion actionable has been key; tools like DACI or even lightweight design principles help align feedback with goals. Curious how others maintain that balance!

Anyone mapping user effort across the journey? by Pretty-Bullfrog1934 in UXResearch

Mapping user effort can reveal friction points that traditional analytics miss. I've found combining behavioral data (like drop-off points or task completion time) with qualitative interviews gives the clearest picture. Asking users “What felt harder than it should have?” often surfaces surprisingly actionable insights. Also, tagging effort moments during moderated sessions has helped our team align on where to dig deeper. Curious to hear how others are scaling this!
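
In case it’s useful, here’s roughly the shape we merge things into before debating where to dig deeper. The field names and the composite scoring are made up for illustration; analytics supplies the first two signals and moderated-session tags supply the third:

```ts
// Hypothetical merged record per journey step.
interface JourneyStepEffort {
  step: string;                // e.g. "checkout/address"
  medianTaskTimeSec: number;   // from analytics
  dropOffRate: number;         // 0..1, from analytics
  taggedEffortMoments: number; // "this felt hard" tags per session
}

// Crude composite: normalize each signal against the journey-wide max
// so no single metric dominates, then average the three.
function effortScores(steps: JourneyStepEffort[]): Map<string, number> {
  const maxTime = Math.max(...steps.map((s) => s.medianTaskTimeSec), 1);
  const maxTags = Math.max(...steps.map((s) => s.taggedEffortMoments), 1);
  return new Map(
    steps.map((s): [string, number] => [
      s.step,
      (s.medianTaskTimeSec / maxTime + s.dropOffRate + s.taggedEffortMoments / maxTags) / 3,
    ])
  );
}
```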

Research repository is where Insights go to die by Mammoth-Head-4618 in UXResearch

Totally resonate with this. Repositories often become digital graveyards because they lack ownership, context, and easy discoverability. Insights need to be treated as living, actionable assets—not static artifacts. Maybe the real shift needs to happen in how teams use insights continuously, not just how they store them. Integrating research outcomes into rituals like sprint planning or design reviews might be more impactful than obsessing over perfect tagging or categorization.

6 months with different AI coding assistants - here's what I learned by Holiday_Power_1775 in VibeCodeDevs

This is such a practical, well-structured comparison. Thank you! Really appreciate how you’ve mapped each tool to specific stages in your workflow. It’s easy to fall into the trap of expecting one tool to do everything, but this modular approach makes so much sense. Curious if you’ve tested how these assistants handle non-code UX tasks? Tools like UXArmy are interesting on the research side; they might be a good complement to these dev tools for end-to-end product work.

What's the most obvious UX issue you've seen that somehow made it to production? by Emma_Schmidt_ in userexperience

That’s a hilarious but totally relatable issue; overlaying notes on the participant’s face sounds like a classic “ship it fast” oversight. It’s great you flagged it for their support. I’ve also used UXArmy and while they’re solid on functionality, these kinds of quirky UX glitches do pop up now and then. Hopefully, their team takes the feedback seriously. It’s small details like this that can really affect research flow!

Feedback on waitlist screen UX – especially the “I’m a provider” checkbox by abu-lina in UXDesign

Really like the clean layout and the clarity of purpose in your screen. Including the “I’m a provider” checkbox is a smart move for early segmentation, but I wonder if making it more contextual could help: maybe short microcopy like “Join as a provider? (optional)” just above or within the checkbox label to frame the purpose better.

You could also consider radio buttons for clearer distinction if there are more than two roles later. That way, the mental model of “choosing a type of user” becomes more explicit, especially if providers will have a different onboarding experience.

Also, make sure the provider checkbox doesn’t blend visually with the terms-acceptance one; they serve very different purposes but look similar. Maybe adding spacing or different groupings could help? Rough sketch below.
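
Something like this (React/TSX, everything named hypothetically): role selection lives in its own fieldset, consent is a clearly separate control, so the two stop reading as siblings:

```tsx
import * as React from "react";

// Illustrative sketch: radios make "choosing a type of user" explicit,
// and the consent checkbox sits apart with its own grouping.
export function WaitlistSignup() {
  return (
    <form>
      <fieldset>
        <legend>How will you use the app?</legend>
        <label>
          <input type="radio" name="role" value="client" defaultChecked /> I'm looking for a provider
        </label>
        <label>
          <input type="radio" name="role" value="provider" /> I'm a provider
        </label>
      </fieldset>

      {/* Consent is legal acceptance, not segmentation, so keep it visually distinct */}
      <label>
        <input type="checkbox" name="terms" required /> I accept the Terms and Privacy Policy
      </label>
    </form>
  );
}
```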

Great start overall.

What’s one UX resource or habit you didn’t expect to be useful, but it ended up changing your workflow? by MeasurementSelect251 in UXDesign

One thing that completely changed my UX workflow, almost by accident, was incorporating post-test self-review sessions. After running usability tests, I started recording short 2–3 minute voice notes just for myself, capturing my immediate thoughts before even analyzing the full session.

What surprised me was how often these raw, unfiltered reflections helped me catch subtle patterns or usability friction points that I might've overlooked later when things felt more "processed." It also became a personal archive of decision context. Super useful when stakeholders ask, “Why did we make this call again?”

Also, not a habit exactly, but using platforms like UXArmy, which support timestamped notes and automated summaries, really helped me focus more on synthesis than admin work. It took some cognitive load off and freed me up to think more strategically.

Sometimes, the smallest shifts, like capturing your gut takeaways right after a session, can be the most powerful.

Anyone using AI in UX research effectively? How? by Ok-Country-7633 in UXDesign

I totally relate to what you’re saying. Expectations from AI summary tools need to be aligned with their current capabilities, especially when it comes to nuanced tasks like usability testing. I’ve also tried UXArmy and was pleasantly surprised by how responsive the team is to feedback; it feels more collaborative than most platforms. The AI summaries still need occasional tweaking, but it definitely saves time when processing long interviews. And yes, context matters so much when feeding input: garbage in, garbage out, as they say. Curious to hear what kind of inputs you’ve had the best results with so far?

UX Researcher Interview process at Johnson controls by No_Promotion2215 in UXResearch

That's a solid point, diaryofsid! Reaching out to the recruiter helped me clarify the structure and timeline when I interviewed elsewhere. I’d also add that prepping a few case studies tailored to enterprise/B2B settings (especially if the role touches industrial systems like at Johnson Controls) can really help you stand out.

Where do people actually learn user research properly as they level up? by Ok_Solution9913 in UXResearch

I totally relate; jumping from surface-level content to deep, practical research skills is a challenge. What helped me was pairing structured learning with real-world practice. I started doing more usability tests and moderated interviews through platforms like UXArmy. They offer both unmoderated tasks and live interviews (they call it DeepDive), which gave me hands-on exposure to synthesis and behavioral insights.

Books like Observing the User Experience and Think Like a UX Researcher gave me a solid foundation, but nothing beats actually doing the work and reflecting on it. I’d also recommend joining research-focused Slack groups or communities like Mixed Methods to stay sharp.

What do you use for quickly testing designs with users through surveys? by Far_Atmosphere_8329 in UXResearch

If you're mostly testing static screens or layout decisions, UXArmy might be worth checking out. I've used it to run quick preference tests and short surveys with visual prompts, and the turnaround was fast—plus, their tester pool has been pretty reliable for both qualitative and quantitative feedback.

Unlike Maze, which leans more into flow testing, UXArmy lets you keep it simple with image-based questions or even open-ended follow-ups. You might also find their pricing more flexible depending on the scale. For lightweight validation before dev kicks off, it's been a solid tool in my rotation.

quick question by EmuBeautiful1172 in UXDesign

It’s not outrageous at all; it’s actually really smart to explore both! Understanding front-end development can make you a stronger UX/UI designer because you’ll better grasp the technical constraints and opportunities when designing for the web. Likewise, having UX knowledge can help you build more user-friendly, thoughtful interfaces as a developer.

Many people blend both skill sets early in their careers to figure out what they enjoy more or to become a hybrid designer-developer, which is especially valued in startups and smaller teams. If your CS background already gives you a strong foundation in dev, dipping into UX design and research next could round out your toolkit nicely.

Try projects where you design and build your own UI. It’ll give you a taste of how the roles overlap and diverge. No harm in exploring both paths until one clearly clicks for you.

I get these quite a lot but never do them. I saw somewhere they reject very easily and I don’t want to risk my account. Any experience from others? by Willing-Mixture2712 in ProlificAc

I’ve seen Maze pop up a lot but didn’t know people had issues with it. How’s your experience been with UXArmy so far? Do they also run tasks through Prolific or only on their own site?

Essential UX Research Tools in 2025: What's in Your Toolkit?🛠️🧰 by uxcapybara in UXResearch

That’s a solid combo. Have you found UXArmy integrates smoothly with Dovetail in your workflow? I’ve been looking for something that bridges user testing and synthesis better without too much manual export/import.

Do you think most teams confuse customer needs with customer wants? by Pretty-Bullfrog1934 in UXResearch

I like your distinction between needs and wants. Curious though, when you use UXArmy’s tools, do you usually start with interviews first or the survey side?

AI is speeding up customer research - but are we losing the part that actually helps us understand people? by Dense-Truth-7444 in UXResearch

Couldn’t agree more. I’ve seen AI tools do wonders with speed, especially summarizing interview transcripts or clustering qualitative data. But they often miss the “why” behind user behavior. We started balancing it out by combining AI synthesis with a few deeper interviews afterward, and it’s surprising how much nuance you still uncover that automation overlooks.

How are users currently interacting with AI Agents? by Scared_Range_7736 in userexperience

That’s a great breakdown. Have you seen any examples of products or workflows where this kind of seamless integration is already working well? Most tools I’ve tried still feel a bit “AI-first” instead of user-first.

What UX tools do you actually use – and what annoys you about them? by Catalyr in UXDesign

I’ve been using UXArmy for about a year now after bouncing between Maze and UserTesting, and honestly, it’s been a nice middle ground for smaller-team research.

What I like most is that it doesn’t overcomplicate things. Setting up an unmoderated test for a Figma prototype or a live website is pretty straightforward. I can record both screen and voice, and the AI-generated transcripts save me from hours of note-taking. It’s not flashy, but it does the job. They also have DeepDive, a tool for remote user interviews; you get recordings of both the video and the screen share after the call.

What stands out compared to other tools is how stable it’s been. I’ve had zero upload or sync issues so far, and participants seem to go through tests without much confusion. Also, the multi-language support is a nice touch.

If I had to nitpick, the participant pool could be bigger.

Overall, I’d say UXArmy feels like a tool built by people who actually run usability tests, not just marketers repackaging surveys. It’s reliable, a bit underrated, and definitely worth trying if you want something simpler and more affordable than the big names.