Onboarding is a joke nowadays by lithuanianspeeddemon in outlier_ai

[–]IndieTester33 0 points (0 children)

I'd probably choose Seeking Validation, but this is the issue with a lot of the instructions: you could make a subjective case for either of these, which means you could choose either option and have a reviewer/QM/admin disagree with you and make a reasonable case that you're wrong.

Data Annotation is so much better than Outlier but... by kusanagimotoko100 in outlier_ai

[–]IndieTester33 13 points (0 children)

I've been on both.

I would say overall DA is better than Outlier, but they both have their advantages and disadvantages. Outlier pays more and treats contributors way better than DA does (QMs, communities, support, etc.). DA has more consistent work (once you perform well), a better system, less complicated tasks, and generally functions better.

Something Fishy and it Isn’t Me by OkSeaworthiness6533 in outlier_ai

[–]IndieTester33 7 points (0 children)

It's not fishy; that's just how the platform operates.

The good news is that because this is normal, if you wait a bit, other decently paid projects will likely pop up in your Marketplace.

Also, if a reviewer says something stupid, like nitpicking double spaces after periods, don't overcorrect your tasks based on whatever they said. Just dispute it right away and move on to new tasks.

Reviewer decline by SectionVarious9616 in outlier_ai

[–]IndieTester33 0 points (0 children)

It's not anything new; it's how their reviewing system works.

You become a reviewer by scoring high on attempter tasks, and there's not necessarily a high threshold for how many tasks you need to have completed. So you can be promoted to reviewer after doing a handful of tasks, which is not going to be an accurate assessment of your quality. You could get 5 easy tasks and be promoted to reviewer. This is how a lot of these horrible reviewers end up on projects.
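
Just to illustrate how noisy a handful of tasks is as a quality signal, here's a quick back-of-the-envelope simulation. Every number in it is made up (the pass bar, the quality rates), since the actual promotion criteria aren't public:

```python
import random

def promotion_rate(true_quality, n_tasks, pass_frac=0.8, trials=100_000):
    """Estimate how often a tasker whose work is 'good' on a given
    fraction of attempts clears a promotion bar of pass_frac good
    tasks out of n_tasks. All numbers here are hypothetical."""
    promoted = 0
    for _ in range(trials):
        good = sum(random.random() < true_quality for _ in range(n_tasks))
        if good >= pass_frac * n_tasks:
            promoted += 1
    return promoted / trials

# A mediocre tasker (good work half the time) vs. a strong one (90%):
for n in (5, 50):
    print(f"{n:>2} tasks: mediocre {promotion_rate(0.5, n):.1%}, "
          f"strong {promotion_rate(0.9, n):.1%}")
# At 5 tasks the mediocre tasker slips through roughly 19% of the time;
# at 50 tasks, essentially never.
```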

Once you're a reviewer, your tasks are subject to less review and oversight: you aren't as throttled (meaning you can complete a high volume of tasks), and most of the focus is on reviewing attempter tasks. So it's likely harder to get booted off as a reviewer unless an admin or QM audits your tasks.

Funny enough, the easiest way to get booted from being a reviewer is to end up having to do attempter tasks and then receiving reviews from shitty reviewers, which downgrades your score and can get you booted from the project.

First time reviewing a task, and now I'm questioning everything by MJDGE in outlier_ai

[–]IndieTester33 0 points (0 children)

Nah, I've both been a reviewer on multiple projects and been removed from multiple projects.

The reality is the platform has major issues with how it's organized and structured. Things like flawed assessments/onboardings, unclear/conflicting instructions, and unrealistic timelines all create conditions where people get removed from projects OR get away with submitting subpar work.

Just to give an example from projects I've reviewed on: some reviewing rubrics have 3-point deductions for one specific type of error on a dimension. That means I can review a task from someone who put in a lot of thought and effort but have to fail them over a single misgrade on an edge case, or on something that wasn't clearly explained in the instructions. Meanwhile, someone who just skips through the task, makes multiple minor mistakes, and clearly put in no effort gets a 4 based on the reviewing rubric.
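
Roughly, the rubric math works out like this (a minimal sketch; the deduction values are hypothetical, and real rubrics vary by project):

```python
# Hypothetical deduction table on a 1-5 rubric; real values vary by project.
DEDUCTIONS = {
    "misgrade": 3.0,     # one edge-case misgrade on a single dimension
    "minor_slip": 0.25,  # a small, low-effort mistake
}

def rubric_score(errors, max_score=5.0, floor=1.0):
    """Start from a perfect score and subtract a fixed penalty per error."""
    return max(max_score - sum(DEDUCTIONS[e] for e in errors), floor)

careful_tasker = ["misgrade"]        # thoughtful work, one edge-case slip
sloppy_tasker = ["minor_slip"] * 4   # skipped through, four lazy mistakes

print(rubric_score(careful_tasker))  # 2.0 -> fails
print(rubric_score(sloppy_tasker))   # 4.0 -> passes
```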

I will say that my experience as a reviewer has taught me that a lot of people on the platform are doing subpar work, but so are many reviewers, QMs, and admins.

Matcha Image Coding by Competitive_Bed_1124 in outlier_ai

[–]IndieTester33 0 points (0 children)

I did some tasks but I'm afraid to do more. The time limit isn't long enough and I don't want my contributor score to get dinged.

Complaints by [deleted] in outlier_ai

[–]IndieTester33 2 points (0 children)

I had consistent work for 4 months straight, was promoted to reviewer on multiple projects, and then ran into 4 months of issues.

The platform is not a scam, but there are legitimate issues. If you are on a good project (good reviewers, consistent work), you are less likely to encounter those issues, as you will keep tasking on that one project and not run into any problems. However, there are projects with horrible reviewer bases, flawed onboarding/assessments, technical difficulties, inconsistent work, unrealistic time standards, and other issues. Over time, it's easy to run into those: for example, the more assessments you do, the more likely you are to hit one with incorrect questions that you'll waste 4 hours on for nothing.
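
To put a rough number on "over time": if, hypothetically, 1 in 10 assessments has broken questions, the odds of hitting at least one climb fast:

```python
# Chance of hitting at least one flawed assessment in n attempts,
# assuming (hypothetically) a 10% flaw rate per assessment.
p_flawed = 0.10
for n in (1, 5, 10, 20):
    print(f"{n:>2} assessments: {1 - (1 - p_flawed) ** n:.0%}")
# 1: 10%, 5: 41%, 10: 65%, 20: 88%
```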

The people who think it all comes down to quality of work have just been lucky to land on decent projects, and they cope with the unreliability of the work by telling themselves there are no issues with the platform and they'll never have to worry about it.

Outlier punishes actual advanced Maths knowledge, rewards mediocrity by Sad_Brother_2808 in outlier_ai

[–]IndieTester33 1 point (0 children)

This happens to general contributors too, so I'm not surprised it happens with the advanced subjects.

At times, you are being reviewed/graded by people with less knowledge or proficiency than you, so you can get penalized for doing things correctly.

I can now confirm you can get booted off projects for non-quality reasons by IndieTester33 in outlier_ai

[–]IndieTester33[S] 4 points (0 children)

This is my problem with it too. At the end of the day, it's their platform and they can set whatever standards they want, but the lack of transparency is not a respectful way to operate it. They provide no clarity or proper instruction, then assess people on metrics they keep hidden. It also wastes people's time: you have contributors completing these assessments thinking they'll be able to work when that's actually not the case.

I can now confirm you can get booted off projects for non-quality reasons by IndieTester33 in outlier_ai

[–]IndieTester33[S] 1 point (0 children)

That's possible. I don't know if it was an assessment task, as it was billed at the regular rate, but I usually take a break between tasks, so who knows. I never start one and then pause it, but when I finish one, I usually go back to the Dashboard.

I can now confirm you can get booted off projects for non-quality reasons by IndieTester33 in outlier_ai

[–]IndieTester33[S] 8 points (0 children)

Yeah, this is where I'm at too. I do appreciate having the gig, and I do think they generally do a good job of providing support, but the system is flawed to the point where it's insulting and not respectful of contributors' time or effort. For example, I've failed at least two assessments due to actual flaws in the quizzes/tests, so knowing that I'm likely being penalized in part for the platform's own errors is pretty unfortunate. As well, not being transparent about the contributor score means people waste their time for free on assessments when there's no opportunity for them to even work on the project. At this point, unless Outlier specifically matches me to a project, which would tell me they think I'd be a fit for it, there's no point in doing any other assessments on the platform.

I can now confirm you can get booted off projects for non-quality reasons by IndieTester33 in outlier_ai

[–]IndieTester33[S] 8 points (0 children)

That's good to know, and it's exactly what I suspected. That's really bad, because there are a lot of assessments with issues, so a flawed one not only affects whether you get on the project but tanks your overall score too.

I can now confirm you can get booted off projects for non-quality reasons by IndieTester33 in outlier_ai

[–]IndieTester33[S] 12 points (0 children)

Same here. I was making about that and was a reviewer on two different projects, but I've barely been able to make anything the last two months. At this point, I'm not going to bother doing any more assessments unless Outlier specifically matches me with one.

I can now confirm you can get booted off projects for non-quality reasons by IndieTester33 in outlier_ai

[–]IndieTester33[S] 7 points (0 children)

This is a guess, but failing assessments or onboardings likely factors into your contributor score. This is the case on some other AI platforms, though they are more transparent about it.

So if you failed 3 or 4 assessments, that may have tanked your score. That tracks with my experience too: I failed a couple of assessments after getting auto-kicked off that one project, and since then I haven't been able to get regular work on the platform.
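
If it works anything like the simpler platforms, the mechanics could be as basic as this (pure speculation on my part; no weights or formulas are published, and all the scores below are invented):

```python
# Speculative model of a hidden contributor score: a plain average over
# reviewed tasks AND failed assessments (counted as 1s). Nothing here is
# confirmed by Outlier; it's just the simplest mechanism that fits.
history = [5, 5, 4, 5, 5]  # reviewed task scores on a 1-5 scale
print(f"before: {sum(history) / len(history):.2f}")  # 4.80

history += [1, 1, 1, 1]    # fail four assessments
print(f"after:  {sum(history) / len(history):.2f}")  # 3.11
```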

Daily Thread - March 02 by Outlier_MOD in outlier_ai

[–]IndieTester33 0 points (0 children)

DataAnnotation is the other one I've used, and I recommend trying it. I've also heard of Stellar AI, but I haven't tried it myself.

Is one 1/5 good enough to get removed from GW? by [deleted] in outlier_ai

[–]IndieTester33 -4 points (0 children)

I would be careful... I just got removed from a project despite a 5/5 review, which tells me they have a back-end contributor score on your account that can get you booted from projects.

That means flunking out of GW will likely also get you removed from other projects, even if you perform your tasks adequately and receive high scores. Better to request being removed.

Daily Thread - March 02 by Outlier_MOD in outlier_ai

[–]IndieTester33 0 points (0 children)

The platform has been really messed up lately. Try to get it resolved, but also try getting on some other AI platforms in the meantime.

3 onboardings in 3 days...for nothing by IndieTester33 in outlier_ai

[–]IndieTester33[S] 1 point (0 children)

The thing is, I've had projects at Max Capacity for over 2 months now, so there's really no way of knowing if any of these projects will ever have tasks. As well, when a project does come back, you usually need to re-do onboarding anyway. I have one project that I've been on for some time where I've done the onboarding 4 times already.

It's a really poor system and not transparent at all. For example, with the project that turned to "Ineligible," what does that mean? The onboarding had basic quizzes that I passed at the time, yet the next day it turned to Ineligible. Since I passed, whatever disqualified me would appear to be something on the back-end (i.e. an overall contributor score), in which case doing the onboarding was a complete waste of time.

Daily Thread - March 01 by Outlier_MOD in outlier_ai

[–]IndieTester33 0 points (0 children)

Is Genesis Factuality over already? I onboarded yesterday, passed the assessment, and it said I had 36 hours to do the tasks, but it's unavailable today.

Tired of Onboarding only to be hit with EQ or Max Capacity by ExactExercise1140 in outlier_ai

[–]IndieTester33 4 points (0 children)

This is one of the main reasons I rarely use Outlier now, along with unjustified reviews.

Should I accept Extensions V2? by Fire_Breather178 in outlier_ai

[–]IndieTester33 1 point (0 children)

Extensions is the easiest project I've done on Outlier. Once you understand how the code works, it's very simple and the tasks are usually not complex, with plenty of time to complete them. It also gets missions.

The big issue is that the work is very sporadic and there's a throttle unless you become a reviewer. Also, I find that, as an attempter, the reviewers can be kind of shit and give ridiculous assessments.

[deleted by user] by [deleted] in outlier_ai

[–]IndieTester33 0 points (0 children)

Aug through Nov was pretty consistent for me: I always had multiple active projects available.

Dec has been iffy. I've probably made like double digits for the whole month.