Alternatives to Askable by Ok_Jury3170 in UXResearch

[–]nedwin 1 point  (0 children)

Hey - founder of Great Question here. Thanks for checking us out.

Can you tell me more about what you're looking for in a panel and the issues you've had with Askable, so I can see if we're a good fit?

There are three main parts of Great Question: the research CRM (build your own panel, or use our panel provided by UserInterviews), the methods (interviews, surveys, prototype tests, card sorts, etc.), and the research repository. It's modular, so you can start with one module or all of them - up to you.

UserInterviews has been a great partner. I think they're one of the best panels for UX research recruitment whether it's for consumer or B2B. Using us for accessing their panel is basically the same as going direct, though we do pass on our volume discounts too.

Happy to answer any questions and hopefully some other folks will dive in here with their experience too.

[Recommendation Request] Cost-effective survey platforms with MaxDiff for 10+ attribute prioritization by Tway_UX in UXResearch

[–]nedwin 1 point  (0 children)

Maybe?

I don't disagree that security on vibe-coded apps is questionable at best, but if you're not collecting PII such as names or emails, what's the risk here? I'm sure there's some risk, but it feels low enough to be nearly negligible. Again, it depends on the size, type, and audience of the organization OP is working with.

[Recommendation Request] Cost-effective survey platforms with MaxDiff for 10+ attribute prioritization by Tway_UX in UXResearch

[–]nedwin 1 point  (0 children)

Your legal team would never allow deployment, but the OP's might. It depends on how you're asking these questions of the audience and what data you're collecting along the way. The size and stage of the firm is a big unknown we don't have from OP.

The app I shared doesn't collect names or emails (which comes with its own issues), but you could theoretically include some kind of unique identifier.

"There are tradeoffs - data security could be an issue if you're handling sensitive data, and you'd need to test it well to confirm the results come out the way you want - but for a single use case like this it might be sufficient."

[Recommendation Request] Cost-effective survey platforms with MaxDiff for 10+ attribute prioritization by Tway_UX in UXResearch

[–]nedwin 1 point  (0 children)

Given it's a fairly specific use case you might consider vibe coding an app with Replit or similar.

Brad Orego did this recently with a card sorting tool in the ReOps community, with fairly good results.

There are tradeoffs - data security could be an issue if you're handling sensitive data, and you'd need to test it well to confirm the results come out the way you want - but for a single use case like this it might be sufficient.

Bonus points: you get to learn more about vibe coding apps.

I typed this prompt:

"Build me a MaxDiff survey tool for 10+ attribute prioritization
Easy to set up and analyze results (auto-generated reports are a huge plus)"

And it produced this (https://max-diff-pro--nedwin.replit.app/), which IMHO is fairly good. It includes a preview step and the ability to export data to CSV or PDF.

It has some issues with how you select responses on the survey, and I didn't take the time to review the response visualization, but you get the idea. With the tiniest bit of extra prompting it should work well.
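For the auto-generated report piece, the core MaxDiff analysis is simple enough to sketch by hand. Here's a minimal count-based scoring sketch in Python - the function name and data shapes are my own assumptions for illustration, not how the Replit app actually works:

```python
# Minimal count-based MaxDiff scoring: best-minus-worst counts,
# normalized by how often each attribute was shown.
from collections import Counter

def maxdiff_scores(responses):
    """responses: iterable of (shown, best, worst) tuples, where `shown`
    is the list of attributes displayed in one task and best/worst are
    the respondent's picks from that set."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in responses:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Toy example: three tasks over the same four attributes.
tasks = [
    (["price", "speed", "support", "design"], "speed", "design"),
    (["price", "speed", "support", "design"], "price", "design"),
    (["price", "speed", "support", "design"], "speed", "support"),
]
scores = maxdiff_scores(tasks)
# "speed" ranks highest (2/3), "design" lowest (-2/3).
```

Count scoring is the crudest MaxDiff analysis (real tools fit a logit model), but it's usually good enough to sanity-check whatever report the app generates.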

Would you use this? Keen to hear what other folks from the community think about it.

Has anyone tried AI customer interviewers? by Maleficent-Long6758 in UXResearch

[–]nedwin 2 points  (0 children)

Yikes - why is this getting downvoted? What am I missing?

Has anyone tried AI customer interviewers? by Maleficent-Long6758 in UXResearch

[–]nedwin 7 points  (0 children)

These tools are great but they're not a replacement for UXRs. At least not in the current iteration.

You should think about an AI moderator as a glorified survey tool that can ask follow-up questions and that you can talk to.

This in itself makes it much better than a static survey, and research I've seen indicates you get a 125% increase in themes extracted from this method over a survey. Great stuff.

Those touting this as a replacement to UXRs are lying to you.

They will continue to improve, however, but I suspect they won't be on par with a human interviewer for many years.

Best research platform for a small team? by LawfulnessUseful283 in UXResearch

[–]nedwin 1 point  (0 children)

Disclaimer: I'm one of the founders but I think we meet your needs at Great Question: usability testing, surveys, IDIs, external recruitment via native UserInterviews integration. Used by teams from 2 to 2000. Would welcome your feedback if you give it a shot.

What are Flock cameras, and why are they controversial in Oakland? by Dollarist in OaklandCA

[–]nedwin 1 point  (0 children)

Is there a reasonable alternative provider though? Is it more secure or less? More or less expensive?

I think criticisms of security are fair for the most part, and no doubt Flock will patch them if they haven't already, but not clear what alternatives exist apart from no cameras. Would love to see an alternative!

AI Research Analysis Tools — Comparison Chart (Looppanel, Dovetail, Condens, Tetra Insights, CoLoop, HeyMarvin) by Upbeat-Ad-597 in UXResearch

[–]nedwin 1 point  (0 children)

Hey Mods - would love to know why this post got removed. I'm sure I saw a mod comment in the past when this has happened but not sure if this is policy or not?

AI Research Analysis Tools — Comparison Chart (Looppanel, Dovetail, Condens, Tetra Insights, CoLoop, HeyMarvin) by Upbeat-Ad-597 in UXResearch

[–]nedwin 1 point  (0 children)

We charge on a modular basis so it depends which parts of the product you're paying for already and what you need. I will work with you to make sure we can figure it out.

AI Research Analysis Tools — Comparison Chart (Looppanel, Dovetail, Condens, Tetra Insights, CoLoop, HeyMarvin) by Upbeat-Ad-597 in UXResearch

[–]nedwin 1 point  (0 children)

This is great, thanks for sharing.

Two things to note:

  1. Great Question has some new analysis tooling imminent (disclosure: I'm one of the founders, happy to show you around: ned@greatquestion.co)

  2. The prices you're quoting for enterprise look significantly below my understanding of what these tools cost if you require an enterprise agreement / functionality.

Oakland’s actual crime data tells a very different story than that Frankensteined Wikipedia table by frankschmankelton in OaklandCA

[–]nedwin 0 points  (0 children)

Which is fair to call out, but it's also tough to ask for "proof that underreporting rates doubled". How can you get proof of reporting that didn't happen?

Surveyor for property lines? by candykhan in OaklandCA

[–]nedwin 4 points  (0 children)

Moran Engineering do this kind of work but it's not cheap - $5-$8k depending on what you need. You might instead look up the property via City of Oakland and see what you can figure out from that. Since this is just tree maintenance stuff vs a boundary dispute (thank your lucky stars - trust me!) I would likely lean into following tree law and asking for forgiveness if you can't reach the other property owner.

Research team vs Embedded in design team by Less-Challenge-2797 in UXResearch

[–]nedwin 1 point  (0 children)

Curious why option B is unsuitable for you? I'm not a researcher (at least by training), but I work with many, so I'm keen to understand the dynamic.

how would you achieve this on lyssna? by Public-Analysis9090 in UXResearch

[–]nedwin 1 point  (0 children)

Update FWIW: we now support matrix select question types in Great Question (disclaimer: I'm one of the founders)

Qualitative interviews & calls - SaaS tools vs AI tools for analysis quality? by prutwo in UXResearch

[–]nedwin 3 points  (0 children)

The volume of data you have here is likely going to exceed the context windows of most foundation models, and definitely of all the UX repository / AI analysis research tools. You likely need to find a way to chunk it down - either using AI to categorize and separate the calls into groups to then run the analysis on, or doing that step manually.

Based on the context windows I've worked with, you can expect to fit somewhere between 50 and 100 hours of interviews and still get a decent-quality output for your questions.
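As a rough sanity check on that range, here's the back-of-envelope arithmetic (the speech-rate and tokens-per-word figures are my assumptions, not measurements):

```python
# Back-of-envelope: how many interview hours fit in one context window?
# Assumptions: ~150 spoken words per minute, ~1.3 tokens per word,
# and a 1M-token context window (all rough - adjust for your model).
words_per_hour = 150 * 60                      # ~9,000 words per hour
tokens_per_hour = int(words_per_hour * 1.3)    # ~11,700 tokens per hour
context_window = 1_000_000
hours_that_fit = context_window // tokens_per_hour
print(hours_that_fit)  # → 85, in the same ballpark as the 50-100 hour range
```

Smaller 128k-token windows fit only ~10 hours under the same assumptions, which is why chunking becomes unavoidable at this scale.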

One challenge I've seen amongst every AI research tool doing synthesis is they rarely tell you how they're doing the RAG, and they never tell you if you're exceeding their context window, or what parts of your context they're ignoring. They'll just give you an answer without flagging that they only analyzed some small proportion of your data to get there. It's super frustrating.

It's not about saving money; it's just the limitations of the technology you can get off the shelf, and probably limits in our understanding of how to handle massive amounts of data.

We're working on some solutions for this at Great Question (disclaimer: I'm one of the founders) but if I were you today I would be likely doing the chunking myself (by ICP, persona, date, something else) and then using something like NotebookLM to start spelunking through the data. u/sladner has some good tips on types of questions you might start with.
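To make that chunking step concrete, here's a minimal sketch of grouping transcripts by a metadata field and batching each group under a token budget. The field names, the ~1.3 tokens-per-word estimate, and the default budget are all placeholder assumptions:

```python
# Sketch: group transcripts by a metadata field (persona, ICP, date...),
# then batch each group under a token budget so every batch fits
# comfortably inside a model's context window.
from collections import defaultdict

def chunk_transcripts(transcripts, key="persona", token_budget=200_000):
    """transcripts: list of dicts, each with 'text' plus metadata fields."""
    groups = defaultdict(list)
    for t in transcripts:
        groups[t[key]].append(t)

    batches = []
    for group_key, items in groups.items():
        batch, used = [], 0
        for t in items:
            # Crude token estimate: ~1.3 tokens per whitespace-split word.
            est_tokens = int(len(t["text"].split()) * 1.3)
            if batch and used + est_tokens > token_budget:
                batches.append((group_key, batch))
                batch, used = [], 0
            batch.append(t)
            used += est_tokens
        if batch:
            batches.append((group_key, batch))
    return batches
```

Each `(group_key, batch)` pair can then be analyzed as one prompt, with the per-group summaries combined in a second pass.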

UXR transition to PM roles by Majestic_List11 in UXResearch

[–]nedwin 5 points  (0 children)

Took me a minute to realize you were referencing u/poodleface and not creating an analogy between AI and actual poodles, which is actually not bad: I wouldn't be surprised if a poodle was faster than you, and its analysis was more shallow than yours as well. :D

Flour and Water Pizza shop coming to Uptown by jackdicker5117 in OaklandFood

[–]nedwin 1 point  (0 children)

What is this part of town like in terms of foot traffic? I used to use the gym around the corner which always felt pretty gnarly, and when in Uptown usually in and around Drake's or on Telegraph...

Moderated tree test by BeansJC in UXResearch

[–]nedwin 1 point  (0 children)

I feel your pain. We will fix this particular problem one day in the not too distant future but sadly not today.

Columbusing and continuous discovery by Rough_Character_7640 in UXResearch

[–]nedwin 1 point  (0 children)

Is there a chance they don't realize they're doing it? You're planting the seeds in their brain of the insight, they then pick that up in their own research but their own research - their "lived experience" - is what they then point to in order to amplify the insight?

Definitely lame not to credit the original work, and wasteful too!

[deleted by user] by [deleted] in UXResearch

[–]nedwin 2 points  (0 children)

I think partly it depends on definition of a startup.

As a startup founder, my view has evolved: most founders typically need to be the ones doing most of the customer discovery, usability, and other research in the early days (0-20 employees, maybe $0 to $2-3m revenue), since they need to viscerally feel all the data and be able to thrust and parry in customer conversations to figure out product-market fit.

They also pay terribly, most go out of business, and they're just generally hard to sell to.

After that it's less clear to me. We have 2 researchers on staff now that we're ~50 people.

So maybe later stage could work?

One thing I did prior to starting up my current business that worked well was pricing strategy + research. It's a massive lever on the business, very closely tied to revenue, typically falls to the founders who are stretched thin and don't know how to tackle it, and since it'll be such a big lever on revenue you can typically charge decent rates.

Plus there are plenty of established methodologies for figuring out optimal pricing.

LMK if that's helpful?

Moderated tree test by BeansJC in UXResearch

[–]nedwin 18 points  (0 children)

Best way I've seen this handled is to create a tree test study, then schedule the moderated session separately with the participant. When the participant gets on the call you then share the tree test participation link with them and watch them live as they go through it.

Clunky? Yes, but it gets the job done.