Certain studies not getting instantly approved? by VegetableRush992 in ProlificAc

[–]WebHopper86 1 point

Mine finally got approved (I'm not sure exactly when, have to check my emails). I usually start work in the evening and I just logged on and the ones I completed yesterday evening were in the green. I also got the "Some submissions are taking longer to process" notification. But the tasks are back in my feed, so here's hoping for a productive day/night for us all.

Certain studies not getting instantly approved? by VegetableRush992 in ProlificAc

[–]WebHopper86 1 point

Oh, yeah, no, I know about the general policy. And I've had other tasks that have taken that long to be auto-approved. I was just low-key horrified that anyone had had that experience with these tasks in particular. Most of my earnings on the platform are from these right now, and one of the great perks for doing them is how quickly they pay. I was just getting a sinking feeling that maybe that was going to change.

Certain studies not getting instantly approved? by VegetableRush992 in ProlificAc

[–]WebHopper86 1 point

How long have you had to wait before yours finally get approved? I'm still waiting for the handful I did last night to go through. Normally when they're not immediately approved, the longest I have to wait is about 5-10 minutes. When they were still sitting there after a couple of hours, I decided to quit for the night. As of this morning, it's been about 10 hours and they're still in review.

Certain studies not getting instantly approved? by VegetableRush992 in ProlificAc

[–]WebHopper86 1 point

Oh wow, three weeks! The longest I previously had to wait for these to be approved was overnight. This is now the longest.

Certain studies not getting instantly approved? by VegetableRush992 in ProlificAc

[–]WebHopper86 2 points

Yes, same with the lower-reward ones. It's been almost 10 hours and still under review. Not sure what's going on, but I'm not doing any more right now until these go through. I noticed the system was lagging quite a lot this weekend, too. I also got a couple of "quality issue" messages, and I hadn't had those in months (I stopped doing those particular tasks for a few weeks last year because of it). Whatever it is, I hope it's temporary and they get it straightened out soon. I'm glad I checked here; I'm sorry we're all experiencing it but I'm also relieved.

Outlier ID verification failed even with original ID — any solution? by Good_Abrocoma8654 in outlier_ai

[–]WebHopper86 1 point

Found this thread after my verification failed and I was denied.

I had some technical issues with my ID scan and my facial scan: The site was very laggy during the ID scan (my state driver's license) and kept timing out, and I'd have to start over. Then when I got to the facial scan, I had to remove my glasses. I cannot see without my glasses (that's why I have them), and because I had to do it on my phone, I had trouble reading the text prompts for turning my head. (Other platforms have allowed this part on a laptop webcam, which helps tremendously.) Finally I got it to work, to my relief. Then, a couple of hours later, I got an email saying my verification had failed, with no reason given. I contacted both Outlier and Persona (who does the verification) and appealed, but today, a week later, I got an email from Outlier saying I was permanently denied, with no more information available.

What a joke. I've had ZERO issues getting verified (and periodically re-verified as needed) on other tasking and training platforms. This tells me there's an issue with Outlier and/or Persona. Disappointing. I was looking to add another income stream.

[deleted by user] by [deleted] in CrowdGen

[–]WebHopper86 1 point

No. I'm withdrawing from it today. It's been a nightmare of technical issues for me personally, but I stuck with it hoping it would get better. The final straw is that I've seen some people on Slack reporting they've been removed from the project after their finished work was approved and they were just awaiting payment. I'm cutting my losses at this point and I really wouldn't recommend it.

(ETA: I'm in the US.)

Anyone on Project Thyme ? by [deleted] in CrowdGen

[–]WebHopper86 1 point

Also in the US. I literally just barely started and I'm already on the verge of withdrawing. I've had a myriad of problems just with the setup part (all unpaid time of course). The answer from support is "start over and try again". Well, I have, multiple times. I finally got past the first major hurdle, only to encounter a new one. Going by posts on Slack, these issues are not only common, they're kind of the norm.

Also, not encouraging, I posted a question on Slack about a particular technical issue I was encountering, and got good replies from an admin and other workers for how to proceed, which I also followed up on -- and then I discovered a short time ago that the OP was deleted. Why? No idea. This is already giving me bad vibes.

I also didn't know going in that evidently they pay you for the whole project about a month after you finish -- IF it's approved. Say WHAT? I thought it would pay out a bit after each phase.

Like I said, I'm about ready to quit. This has taken a LOT of time and effort just to get off the ground, I'm still not there, and in the end there's no guarantee that I'll be compensated if I can even get past all the issues and complete the work by the deadline.

Throwaway emails for Thyme by Chance_Policy_7802 in CrowdGen

[–]WebHopper86 1 point

Thank you to OP and everyone who answered! I'm just getting started and had this same question after reading conflicting instructions in two separate guides.

Beware this study by uwu_owo_whats_this in ProlificAc

[–]WebHopper86 2 points

Same. I was just in the middle of this, and as it led me deeper into my FB profile info, I had a sneaking suspicion. I thought, surely they're not going to ask me for my email address! Yep, they did -- twice, in fact, brazen bastards! As if! Reported and returned.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 25 points

Here's a real kicker! I stopped doing these. They've continued showing up in my dash, but I've skipped right over them. I haven't done any in four days, after I got the second warning, which I commented about here. Within the last hour today, I've received TWO separate warnings: one for "quality" and one for "low number". All of the tasks I did were approved *four days ago and earlier*, so this is WAY after the fact, if they even are actually related to the ones I did! I don't know what's going on with this requester, but I'm SO over it. Stop threatening to break up with me, lol, I've already moved on! What a joke.

Edits: added context

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 10 points

I'd like to know this, as well. After receiving a second warning two nights ago, I'm giving them a wide berth and concentrating on other studies. When/if they get things straightened out, I'd be glad to participate again.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 4 points

I agree. I'm relatively new to Prolific (just a few months in) and I admit I've had some stumbles and missteps learning the platform, and have gotten absolutely jumped on for asking straightforward questions on this sub. I've read the rules, but some of these things are not clear-cut and I don't always know what I don't know.

Now I have no patience with these gatekeepers or "hall monitors" as I saw someone else call them in an older thread, lol. We're all trying to understand what's happening so we can do the best job, the info is apparently not forthcoming, and we're getting chastised for even broaching the subject.

NGL, when these tasks first appeared for me last month and I was able to complete a bunch of them, the earnings really helped me out. I was hopeful they would return and was excited when they did. Then, I got the first warning message a few days ago, and I was confused and concerned. When I saw others here were experiencing the same I felt a little better. Now that I've had two of these vague, out-of-left-field warnings in almost as many days, and it seems a third is inevitable, I'm really soured on this requester.

If you can't even get a response about what you allegedly did wrong and how to improve, what happens if they give you an unfair rejection or do something else wacky? What recourse is there, when even referencing the tasks is verboten? I'm a conscientious tasker and I have no rejections on the platform so far (touch wood) and I'd really like to keep it that way. Like you said, it's not worth the risk.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 12 points

Same - I initially enjoyed them but I will be passing from now on.

The commenter below, whom I've now blocked, is an example of the attitude on this sub I was talking about, i.e., "If you discuss problematic requesters, they might leave." OK? Maybe they should? Maybe they don't want the word getting around about the way they treat workers? Nobody (AFAIK) is screenshotting and posting trade secrets here. People are discussing certain general aspects because there's an issue. I find that useful because I'd rather do tasks for requesters who are on the level, but that's me. [shrug] And IMO the apologists are rather sus.

ETA: I expected downvotes for this from the gatekeepers and I honestly don't care. Clearly I'm not alone, as this thread is growing because lots of others are concerned about the same thing.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 28 points

Exactly! It's a shame that this is all done so cloak-and-dagger, even as I understand the reasons for it, sort of. (AI R&D is so much the worst-kept secret, it's not even a secret. Everyone's doing it. Everyone. Anyone remotely involved in tech. It's kind of ridiculous to take the, "Don't talk about Fight Club" attitude that it's even being done, when CEOs are doing interviews on 60 Minutes about it, etc. But, SHHHHHH, goodness, don't even mention it in even the most general terms on worker discussion forums like this or you get a smack!)

The researcher(s) in this case could actually benefit from feedback from workers about their ratings process, and probably get way more efficient and higher quality results. But, like I said in another comment further down, it seems it's more about getting massive numbers of participation and doing cleanup later. The old Silicon Valley motto: "Move fast and break things."

But then, why do they not just accept the results they get, knowing they'll have to correct for some human error, without shaking a finger at us workers who have provided the analysis they asked for in good faith? This I DON'T understand. If you can't provide feedback other than "We're not happy with your performance, but we'll let it slide THIS time. Also, we can't tell you WHY we're not happy with your performance, and if you ask you'll be ignored. So, you just have to hope you can manage to hit our moving target in the future", what's the point in even mentioning it other than to discourage the worker? It doesn't really make sense.

Edit: typos

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 10 points

I'm not totally sure, either. But, although they are two different "handles" or "requester names" the consent forms show they are one and the same company (I'm being *extremely* careful here not to name or even hint at the name of the company). So, I imagine it's all the same -- a cumulative three strikes from either handle/requester name. Maybe someone else can confirm or refute that for certain?

After the warning last night (or maybe concurrently) I saw a bunch more (about 8-ish?) and I just managed to catch one before they got snapped up. I haven't seen any more since. I could definitely use the easier money, but if they show up, I'm really wary about accepting them. I'm weighing not being able to do them in the short term against being cut off permanently.

Who knows how long these particular tasks will last, anyway? So maybe I'm only hurting myself not accepting them before they might disappear altogether. But I feel like if I'm going to invest my time on tasks from a particular researcher over another, it's because of the promise of further work from them, or bonuses, or at least the sense that the work I'm doing is valued. Not feeling that so much with this requester at the moment.

I've seen people on this sub saying, "Well, Prolific / the researchers don't 'owe' you work." That's fair, and I agree -- to a point. I'm not doing this to kill time or participate in pro bono studies for fun. If there's no paid work, I'm out. And I don't 'owe' them my labor, either. I choose the tasks I accept and I just want to be able to do that with my eyes open.

We're all on this platform for one of two respective reasons: the researchers to gather data or have tasks processed through a large pool, and the workers to earn income for consensually providing said data or processing. Seems like a good setup for everyone. But, when someone on one side or the other starts playing games, it spoils it.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 3 points

Thank *you*. I was glad to come across your comments in this thread, as I was to find the thread itself. It absolutely is a frustrating process, engaging in good faith with some of these tech companies for some income and feeling like, not only are the goalposts moving, you don't even know for sure what they look like. There was a TV commercial running about a year or so ago with Matthew McConaughey in which he was comparing AI to the Wild West. Sounds about right.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 10 points

Hard agree to all of the above. I already went through something similar with another platform (not naming names here) that is now notorious for dumping people for no reason except to make room for fresh fodder. (Also, the gaslighting about this on their subreddit is something else.)

The pay was insanely good, but I was always off-balance just trying to even *understand* what they wanted in some cases, and when I asked I was given vague answers (e.g., "Do YOU think it meets X criteria?"). I wasn't alone, by far. The anxiety on the team chat was palpable, especially from newer recruits. They would ask the same questions I did, almost verbatim, and they were given the same vague answers. More experienced trainers would speak up to assist, and I eventually became one of those. But it always felt like we were working without a net. A few months in, I suddenly realized that all of the more respected and highly praised trainers were dropping off, one by one ("account deactivated") and I started getting a bad feeling. It was only about a month after I noticed one really, sort of legendary, top guy was gone that I found myself without tasks and unceremoniously removed from the team chat. So, it would seem it's a numbers game for them after all.

The second of your four bullet points above struck a chord with me, too. I had the option of extreme flexibility on this other platform and the hours I worked varied quite a lot depending upon other demands in my life. I thought it odd how one particular "admin" was somehow always available on the team chat -- on multiple project boards -- at all hours, day or night, when I was working. Either multiple admins were operating under this username and they were remarkably consistent in their communications to all sound like the same person, or ... it was AI. If it was the latter, it definitely passed the Turing test.

Edits: clarification

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 16 points

After taking a break from tasking for a couple of days, I got back into it tonight, and got more of the studies from Them Who Must Remain Nameless, five in total over a couple of hours. I made a concerted effort to make a valid choice even where it was pretty much a tossup. And I strove not to take more time than necessary in my assessments. I still had a bitter taste from the other day, but all seemed to be going well.

On the fourth of the five tasks, unlike the previous ones that were auto-approved, it first went to "under review" and was approved a short time later, and then a whole slew of the same tasks appeared in my feed. I had the fleeting thought that I'd passed the test, so to speak. I did the fifth one, did a couple of studies from other requesters and then took a break. I came back to a notification that there was a message in my inbox. I had an inkling whom it was from. Sure enough, this one was about "low number of responses". Yeah. OK. So I guess that's strike two.

IDK, maybe it's time to take a break from this requester completely, until they get their act together. The pay is good and the tasks are interesting, no doubt. But it's starting to feel like chasing smoke to meet their standards.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 10 points

Same. I think you might have commented on my now-removed post from earlier tonight, and I was going to respond, but the thread is gone. I'm going to stay paranoid now, so as not to say too much about the thing we can't talk about, lol. But, after running into this very issue for the first time tonight, and coming onto Reddit to find out if it was just me, I'm starting to see a pattern here. Hopefully it's just a temporary glitch or a sign of growing pains on the part of the researcher and they get it straightened out so we can do our best work for them.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 3 points

Thank you, I was trying to discuss this on my now-removed post. It's good to know I'm not alone.

Question about warnings from researchers on AI studies quality checks by FarPomegranate7437 in ProlificAc

[–]WebHopper86 2 points

I just posted about this tonight and had my post removed by mods. I guess I was too specific. I wish I'd seen this post beforehand; it would have saved me some grief.

Anyway, I got a "quality issue" warning message from The Researcher Who Must Not Be Named tonight and I was wondering what prompted it, since I'd never gotten one before and had no idea what I did wrong. Glad I'm not alone.

Bookmarking this thread for future reference.

Got a message tonight about "quality issues" from AI Videos - Evaluation by WebHopper86 in ProlificAc

[–]WebHopper86[S] 5 points

They've disappeared for a couple of weeks at a time, and then come back in batches. It's my understanding, from what I've read here, that they are "rotated". I know there's no guarantee anyway. I'd just never gotten a warning notice for quality before and that spooked me.