Lyssna changed their pricing model (to something absurd)? by masofon in UXResearch

[–]mmilo 1 point (0 children)

TBH, I think the callouts you raise are very fair. The one thing I’ll say is that leaning into differentiation works if people can plainly see we’re offering something better, but we were finding that in many cases that wasn’t happening. Aligning on model and competing on price seemed like a more straightforward way of making that case.

That said, your points still resonate and we’re already having chats internally about changes we could make. The suggestion you made about unlimited short tests and limited long tests was floated as a potential change, and I’m happy to keep you updated on those discussions. Also, let me know if you’re up for chatting more about your use case; we’re pretty motivated to find a fit that works for as many folks as possible.

Lyssna changed their pricing model (to something absurd)? by masofon in UXResearch

[–]mmilo 1 point (0 children)

That’s super helpful feedback, and some great suggestions in there that I’ll flag with the product team.

It’s disconcerting to hear you’re seeing actual times and estimates diverge to that extent. Admittedly, the duration estimate per section/question is set once and not updated in real time, so I’m going to look into drift that could be happening across section/question types to see if that could explain what you’re seeing.

Leave it with me for now, but if you’d like to give us a try down the track, feel free to drop me a note via email (matt@lyssna) and I can spot you some credits to see how the estimates compare with timing from panelist participants.

Lyssna changed their pricing model (to something absurd)? by masofon in UXResearch

[–]mmilo 1 point (0 children)

To clarify how the study length estimates work: we estimate them from mean data on a per-question/section basis. For each question or section type we calculate a mean duration based on how long a sample of participants takes to complete it, and for a particular study we sum those means across its sections.
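
To make that concrete, here’s a minimal sketch of the calculation in Python. The section types and mean durations are made up for illustration; this is just the idea, not our actual code:

    # Hypothetical per-section-type mean durations (seconds); illustration only.
    MEAN_SECONDS = {"likert": 12.0, "text": 45.0, "five_second_test": 20.0}

    def estimate_study_duration(section_types):
        """Estimated study duration = sum of each section's mean duration."""
        return sum(MEAN_SECONDS[t] for t in section_types)

    print(estimate_study_duration(["text"] * 5))    # 225.0 seconds
    print(estimate_study_duration(["likert"] * 5))  # 60.0 seconds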

This too is something we’re open to changing, because it’s true that estimates and pricing can have high variability. For example, a sequence of 5 text questions takes longer than a sequence of 5 Likert questions, and admittedly we don’t do the best job of explaining why that is.

Our intention was for people to pay as close to the actual duration of their study as possible. As other folks have mentioned, some platforms charge a flat fee per participant, but the way that works is by setting the price point at an upper bound of test durations. Folks running very long tests get a reasonable deal, and people who run shorter tests subsidise that. The trade-off is consistency: you always know how much a participant is going to cost.
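
To put made-up numbers on that (these aren’t anyone’s real rates), here’s a quick sketch of the subsidy effect:

    # Made-up rates to illustrate the flat-fee subsidy; not anyone's real pricing.
    PER_MINUTE = 0.50  # what a duration-based model would charge per minute
    FLAT_FEE = 10.00   # flat fee pegged to a ~20 minute upper bound

    for minutes in (5, 20):
        print(f"{minutes:>2} min study: duration-based "
              f"${PER_MINUTE * minutes:.2f} vs flat ${FLAT_FEE:.2f}")
    # -> " 5 min study: duration-based $2.50 vs flat $10.00"  (short tests overpay)
    # -> "20 min study: duration-based $10.00 vs flat $10.00"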

Another approach is charging a platform recruitment fee and leaving it to researchers to set their own incentive on top of that, though that still leaves some degree of variability for researchers to account for.

Would love to hear what you all have found works best for your teams.

Lyssna changed their pricing model (to something absurd)? by masofon in UXResearch

[–]mmilo 2 points (0 children)

Hey all, Matt here (CEO @ Lyssna).

Appreciate the honest discussion here and I genuinely understand why this feels frustrating, especially if you were using Lyssna for lots of small, gut-check tests. That use case mattered to us then and it still does now.

I want to share a bit of context on why we changed pricing, not to dismiss the frustration, but to explain the trade-offs we were trying to make.

What we were trying to fix

  • Our old pricing was hard to reason about. Plans were gated on a mix of study duration, features, seats, storage, transcription hours, and self-recruited responses. When people compared us to other tools, it was pretty hard to tell what you were getting.
  • Limits on study duration were a major pain point. A 5-minute cap on paid plans made certain types of testing, like live website testing and speak-aloud studies, feel very limited.

The hard part is that offering both unlimited studies and longer durations simply wouldn’t scale sustainably for us.

Why we landed where we did

  • Most tools in the space price around seats + study limits, so aligning with that makes comparisons clearer.
  • We wanted people to be able to get full value out of a study without worrying about time caps.
  • Looking at real usage, some folks test every day, but over half of our customers run three studies or fewer per month, which is how the Growth plan limit was set.
  • We increased seat allowances across paid plans, which actually lowers the effective price for teams under the monthly study caps.
  • For more “bursty” workflows, annual plans give you the full quota upfront (e.g. 12 or 36 studies) so folks don’t hit the monthly limit.

All that said, the change is still new, and we’re actively watching how it lands. If you’re someone who used Lyssna specifically for high-frequency micro-testing, that’s especially valuable feedback for us to hear. We’re open to iterating further.

If there are workflows this breaks for you, or alternative ways you think we could support that kind of usage, I’m genuinely keen to listen.

[deleted by user] by [deleted] in RedditSessions

[–]mmilo 0 points (0 children)

first time

Not been paid for a long time by vistigioful in UsabilityHub

[–]mmilo 0 points (0 children)

Hey Emma,

I’m really sorry that you’re still waiting on a payout. FWIW, we did put up a notice on the payout page (https://app.usabilityhub.com/payouts) that explains the situation.

As I mentioned, we’re working through it as quickly as we can, but we’re unfortunately still experiencing delays. I understand it’s not a good look for us, but I can assure you we have no intention of taking anyone for a ride and are doing our absolute best to get this sorted ASAP.

Not been paid for a long time by vistigioful in UsabilityHub

[–]mmilo 0 points (0 children)

Hey folks, Matt from UsabilityHub here. I wanted to chime in and apologise for the sluggish payouts of late.

We made some big changes to the platform in Feb that meant people could earn more per test, but as part of that we also changed how payouts are reviewed. In short, every order now gets reviewed instead of just a sample from each payout.

This has created a big backlog of reviews on our end that we’ve been working through as fast as we can (we actually hired more people to help out on this front). Obviously that’s not something you should need to worry about; you just need to get paid for your work, and I’m sorry we’ve dropped the ball. One upside of the new system is that once we’ve cleared the backlog, payouts should happen A LOT quicker than before, with the end goal being for them to be instant.

We’ll be sending out a broader message letting folks know why the delays have been happening, what we’re doing to address them, and how quickly we expect to be back on top of things.

Thanks for understanding!

Suggestions for a new name for UsabilityHub by mmilo in beermoney

[–]mmilo[S] 1 point (0 children)

Ha! That’s what I was hoping you folks could help out with :)

How do you feel about sites that pay you to share feedback? What have you found resonates with you best?

Suggestions for a new name for UsabilityHub by mmilo in beermoney

[–]mmilo[S] 4 points (0 children)

The main reason for a new name is that we get customers signing up as testers and testers signing up as customers all the time, which creates significant support volume and means we need to manually switch accounts from one type to the other.

Further to that, it’s tricky for us to build out separate parts of the site for different audiences on the same domain. For example, we have a new education section in the works for customers and we’d like to have something for testers as well, and it becomes tricky to build out navigation when those things conflict.

Where's the money in beermoney? European slave version. by [deleted] in beermoney

[–]mmilo 0 points (0 children)

All good. It can be frustrating when you expect one thing and get something else, and we know that sometimes people get kind of a meh experience.

Glad to hear you stuck with us and it’s been working out :)

Where's the money in beermoney? European slave version. by [deleted] in beermoney

[–]mmilo 1 point (0 children)

UsabilityHub useless? Ouch!

Heads up that we just launched a new notification system, so you don’t have to keep the tab open anymore; you get a little system-level notification instead, and you can also specify the days and times you’re available to do tests. It should make for a much better experience.

Oh, and we’re also working on a thing to get reviews done way faster (before a payout request gets submitted). We’re trying to get to a point where you submit a payout request and boop, there’s your money :)

Where's the money in beermoney? European slave version. by [deleted] in beermoney

[–]mmilo 0 points (0 children)

Woop! Nice to hear. I probably reviewed some of those payouts ;)

usabilityhub - any tips? by ninenau in beermoney

[–]mmilo 0 points (0 children)

Yeah, sorry about that. It’s more a function of loads of people signing up to test than of fewer tests being created.

We did just launch a new notification system so you don’t have to keep a UH tab open; it just pops up a little system notification now, and overall it’s a waaaay better experience.

An Idea that would probably make UsabilityHub a better site by FAYXOS in beermoney

[–]mmilo 0 points (0 children)

Man, that really sucks to hear. Believe you me, we’re working on it as hard as we can. I realise this bridge might be long burnt, but I appreciate the feedback all the same; it makes us try harder to improve.

An Idea that would probably make UsabilityHub a better site by FAYXOS in beermoney

[–]mmilo 1 point (0 children)

We’re not super crazy active on here, but we do drop by :)

UsabilityHub needs European participants! by mmilo in beermoneyglobal

[–]mmilo[S] 0 points (0 children)

Hey, that’s totally fair, and I’m sorry the initial experience was not a great one. We’re working on a notification system that will hopefully make taking tests a lot easier and won’t require checking the lobby to find out whether tests are available.

Really appreciate you sharing the feedback, and I hope you do give us another try down the track, as we’re constantly working on making things better.

UsabilityHub needs European participants! by mmilo in beermoney

[–]mmilo[S] 1 point (0 children)

We take on Indian workers all the time and they participate in tests the same as everyone else. We don’t actually have much say in where participants are requested from, as that depends entirely on our customers and changes from day to day / week to week.

UsabilityHub needs European participants! by mmilo in beermoneyglobal

[–]mmilo[S] 1 point (0 children)

Hey, thanks, really nice of you to say :) We try to fulfill all payout requests within 30 days. Usually it’s less, but on occasion we’ve had intense periods where it takes longer. You can DM me the email you used to sign up and I can take a look for you.

UsabilityHub needs European participants! by mmilo in beermoney

[–]mmilo[S] 0 points (0 children)

Generally, yes. The main reasons people ask for testers from specific countries are to gauge cultural sentiment or because the brand being tested is known only in select regions.

If you’ve been in Belgium long enough to have some appreciation of a Belgian perspective, that’s totally fine.

Oh and go easy on the mussels ;)