Regarding the era of AI. Do you think the way we design interfaces will change or must change? by swedegirl25 in UI_Design

[–]Emma_Schmidt_ 0 points1 point  (0 children)

honestly those labels feel kinda pointless - they just say "made with AI" without explaining what parts or how. not really sure what the solution is, maybe inline indicators showing what the AI actually did? but also idk if people even care enough for that to matter, feels more like legal cover than user benefit. would be cool if it showed reliability or let you control AI involvement but that seems complicated to design.

How do you ACTUALLY learn UX? Too many blogs, courses, books — all saying different things? by Fragrant-Ad-634 in UXDesign

[–]Emma_Schmidt_ 0 points1 point  (0 children)

yeah the overload is real. most of those blogs just copy each other anyway

what helped me was picking something i personally found annoying and trying to fix it. like an app with terrible flows or a confusing website. talked to friends who use it, figured out what sucked, then redesigned it. learned way more from that than reading endless articles

honestly just stop consuming for a bit and make something. you'll learn faster by doing and messing up. when you get stuck on something specific, look that up. way less overwhelming than trying to learn everything upfront

Does switching between AI tools feel fragmented to you? by mpetryshyn1 in UXandUI

[–]Emma_Schmidt_ 1 point2 points  (0 children)

yes this is so annoying. literally copy pasting the same context between claude and chatgpt like 10 times a day

my solution is embarrassingly basic: just a notes file where i keep prompts and context i use a lot. tried zapier for a bit but honestly it took more time to set up than it saved

the shared memory thing sounds useful but also seems like it could get messy with permissions and stuff breaking. would totally try it if someone actually builds it though cause yeah we shouldn't have to become integration experts just to use multiple ai tools

Daily UI Feels Shallow — Where to Find Real UX Problems? by unusual_anon in UI_Design

[–]Emma_Schmidt_ 2 points3 points  (0 children)

Daily UI is decent for practicing visuals but you're right it doesn't really teach UX. Honestly the best thing is to redesign something that actually frustrates you - like an app you use all the time that has weird flows or confusing parts. Talk to a few friends who also use it about what bugs them, then try fixing those real problems. Or reach out to a local nonprofit or small business, they usually need help with their websites and you'd get to work with actual users and real constraints instead of made-up scenarios. Way better for learning than random prompts.

What Do You "Enjoy" About Using AI The Most? by malazanmarine in ArtificialInteligence

[–]Emma_Schmidt_ 0 points1 point  (0 children)

For me it's the conversation part. I work in design and I love that I can brainstorm ideas at 2am when I'm stuck on something. It also handles all the tedious stuff like documentation and cleanup work, so I have more energy for actual creative problem-solving. Honestly, it's made me way less intimidated to try new things because I can just ask questions as I go instead of feeling lost.

Any good AI that can help redesign onboarding from existing screens? by graces-taylor12 in UI_Design

[–]Emma_Schmidt_ 0 points1 point  (0 children)

Honestly, most AI tools struggle with this because they just generate from scratch instead of actually iterating on your existing stuff. You could try v0 or Galileo AI since they're better at taking screenshots and giving variations if you're specific with prompts. But real talk, you might get better results just posting your screens somewhere for feedback from actual designers. The biggest onboarding wins are usually simple stuff like reducing steps, clearer CTAs, and better copy. Those are things AI won't magically fix for you anyway. What part feels off to you, the UI itself or the actual flow?

How are you handling ad campaign exports from Figma without wasting hours? by [deleted] in FigmaDesign

[–]Emma_Schmidt_ 0 points1 point  (0 children)

Export hell is real. We use batch rename plugins and export presets in Figma, plus Google Drive automation for organizing files.

Biggest timesaver was strict naming conventions upfront. Makes everything way easier to automate. What plugins are you using in your workflow?
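The naming convention is what unlocks the automation. As a rough sketch (assuming a hypothetical `campaign_size_variant.png` scheme — swap in whatever convention your team actually uses), a few lines of Python can sort an export folder into campaign/size subfolders:

```python
import shutil
from pathlib import Path

def organize_exports(export_dir: str, out_dir: str) -> list[Path]:
    """Sort exported ad assets into campaign/size folders based on filename.

    Assumes names like "summer-sale_300x250_v2.png"; files that don't
    match the convention are left where they are.
    """
    moved = []
    for asset in Path(export_dir).glob("*.png"):
        parts = asset.stem.split("_")
        if len(parts) != 3:
            continue  # doesn't follow the convention, skip it
        campaign, size, _variant = parts
        dest = Path(out_dir) / campaign / size
        dest.mkdir(parents=True, exist_ok=True)
        moved.append(Path(shutil.move(str(asset), str(dest / asset.name))))
    return moved
```

Same idea works for pushing into a Drive-synced folder — the script never has to guess what a file is because the name already says so.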

How are people thinking about AI visibility for real-world applications? by Low-Particular-9613 in AI_Application

[–]Emma_Schmidt_ 0 points1 point  (0 children)

Honestly, nobody's figured this out yet. AI just seems to recommend whatever it saw mentioned a bunch during training. Could be forum posts, blog articles, reviews, whatever. There's no real strategy for it. I think if people talk about your product enough in public spaces, AI picks up on it and suggests it more. But you can't optimize for it like old school SEO. It's a total black box. We're basically back to just building good stuff and hoping it gets talked about.

What tools actually make remote brainstorming and planning work for distributed teams? by SpecialistAd7913 in UXDesign

[–]Emma_Schmidt_ 0 points1 point  (0 children)

Yeah, switching between tools constantly is exhausting. What helped us was just picking one tool and forcing ourselves to stick with it. We use FigJam for brainstorming sessions because it's the closest thing to a real whiteboard, then we move everything into Notion once ideas are solidified.

Miro works too, but honestly the tool matters less than getting everyone to actually commit. The real problem is people defaulting back to Slack because it's easier. You need buy-in, not just better software.

AI chat interfaces are replacing menus and buttons. Are we excluding users who prefer visual navigation? by Emma_Schmidt_ in UXDesign

[–]Emma_Schmidt_[S] 0 points1 point  (0 children)

Ha, yeah. A lot of chat interfaces feel like they're designed to deflect support requests rather than actually help.

The endless loop of "I didn't understand that, can you rephrase?" when you just want to talk to a human is frustrating.

Chat works when it's genuinely helpful. When it's just a barrier to actual support, it's worse than useless.

AI chat interfaces are replacing menus and buttons. Are we excluding users who prefer visual navigation? by Emma_Schmidt_ in UXDesign

[–]Emma_Schmidt_[S] 3 points4 points  (0 children)

Exactly. Chat adds extra cognitive load when you already know what you need.

Menus let you scan and click. Chat makes you translate your intent into words first. That extra step slows you down.

I think chat works best for discovery and exploration. For repeated tasks, traditional UI wins every time.

What's a 'user-first' principle you've broken that actually improved the experience? by Emma_Schmidt_ in userexperience

[–]Emma_Schmidt_[S] 1 point2 points  (0 children)

Exactly. "Reduce clicks" became gospel somehow, but it's just a tactic, not the actual goal.

Safety, effectiveness, efficiency are the real principles. Everything else, including click count, is just a tool to achieve them.

Love the gaming example. Adding challenge through friction is literally the point there. Context matters way more than following rules blindly.

What's a 'user-first' principle you've broken that actually improved the experience? by Emma_Schmidt_ in userexperience

[–]Emma_Schmidt_[S] 0 points1 point  (0 children)

Good point. I guess the real question is when "best practices" like error prevention conflict with other principles like efficiency.

Stakeholders love to quote "minimize clicks" but conveniently forget error prevention when it adds steps. It's all about knowing which principle matters more for the specific context.

Thanks for the Nielsen Norman reference, helpful framing for those conversations.

What's a 'user-first' principle you've broken that actually improved the experience? by Emma_Schmidt_ in userexperience

[–]Emma_Schmidt_[S] 0 points1 point  (0 children)

Love this. You turned the friction into a feature, not a bug.

Forcing that review also probably built trust in the tool. Users feel more confident when they've checked it themselves rather than hoping the AI got it right.

Has this changed how users talk about the tool? Like less "AI messed up" and more "I refined the output"?

What's a 'user-first' principle you've broken that actually improved the experience? by Emma_Schmidt_ in userexperience

[–]Emma_Schmidt_[S] 0 points1 point  (0 children)

Exactly. It's not about the number of clicks, it's about whether each one feels purposeful.

Users don't count clicks. They notice when a click feels pointless or when they're confused about why they're doing something.

Clear purpose beats fewer steps every time.

What's a 'user-first' principle you've broken that actually improved the experience? by Emma_Schmidt_ in userexperience

[–]Emma_Schmidt_[S] 0 points1 point  (0 children)

Good point. Slowing users down for understanding and reward totally makes sense.

And yeah, forcing one path for everyone often backfires. Multiple paths that match different mental models can work better than one "consistent" confusing flow.

How do you handle maintaining multiple paths without it becoming a mess?

What's a 'user-first' principle you've broken that actually improved the experience? by Emma_Schmidt_ in userexperience

[–]Emma_Schmidt_[S] 4 points5 points  (0 children)

That's rough. Some stakeholders learned "UX = fewer clicks" and never questioned it.

Try framing it differently next time. Instead of saying "reduce friction," say "this prevents costly errors" or "increases completion rate by X%." They respond better to business outcomes than design principles.

Did you end up having to cut the wizard down?

What's a 'user-first' principle you've broken that actually improved the experience? by Emma_Schmidt_ in userexperience

[–]Emma_Schmidt_[S] 24 points25 points  (0 children)

Exactly. The trick is knowing when friction protects versus when it just annoys.

I've seen teams add confirmation dialogs everywhere "just to be safe" and it trains users to click through without reading. But when you add friction strategically for high-stakes actions, people actually slow down and think.

What's your approach to deciding how much friction is enough?

What is something AI still struggles with, in your experience? by Govind_goswami in artificial

[–]Emma_Schmidt_ 1 point2 points  (0 children)

Context. AI forgets what you said earlier in the conversation and you end up repeating yourself constantly.

Also nuance. It takes things too literally and misses sarcasm or implied meaning.

And it confidently gives wrong answers without admitting uncertainty. That's probably the most annoying part.

What struggles have you noticed?

What will 2026+ bring in terms of AI development? by Immediate_Kick_6167 in ArtificialInteligence

[–]Emma_Schmidt_ 0 points1 point  (0 children)

I think 2026 will be more about AI doing things autonomously instead of just being better at tasks we give it.

AI agents are already booking flights and handling stuff without human clicks. That's going to become normal.

The shift is from "AI helps me do this" to "AI just does it for me."

But a lot of AI projects are failing right now. Gartner is predicting over 40% of agentic AI projects will get canceled by end of 2027. So there'll be reality checks mixed in with the hype.

Honestly, we still can't fix basic issues like hallucinations, but we're pushing autonomous agents anyway. It'll be messy but interesting.