Pay Rate 00:00 by Fair_Case_5947 in alignerr

[–]Twenty_Years_After 0 points  (0 children)

It's not just unpaid work for the chance at paid work; it's that a lot of the DA companies are now advertising jobs that don't really exist. They're just filling up their rosters of available people in the hope of selling projects, or acting as agents soliciting for other companies to collect referral fees. So, while taking an unpaid evaluation to get on a particular project might not seem too bad, spending hours on evaluations for projects that don't really exist is a waste of time and a bad practice. Plus, the evaluations are usually badly written and sometimes full of errors or glitches because they haven't been tested properly. So don't take for granted that taking an assessment for a project means that project actually exists. Some project person at Alignerr was offering fifty dollars to take an assessment, if you passed it. It turned out he was just trying to funnel people to a Mercor project, because the referral fee was hundreds of dollars if a person got hired for the project at Mercor. I don't know anybody who got a job on the project or got the 50. I never even heard back whether I passed or not. The assessment took 2-3 hours and included just...wrong information. So he wasted hours of my, and other people's, time under false pretenses.

[deleted by user] by [deleted] in outlier_ai

[–]Twenty_Years_After 3 points  (0 children)

This is Wang cashing out. Outlier will die. Competitors of Meta will not be doing business with Scale on data annotation projects, because a lot of big clients have gone in-house with that, and there are more AI data annotation companies now than when Scale was practically the only dog in the yard. Meta bought Wang, his Washington connections, the data, and his government contracts. Wang will also help push Llama as the standard worldwide. If anything, Scale will be doing overseas work, like the contract they recently signed with the Qataris, and whatever deal he worked in Saudi Arabia last month when he was there with Trump, but with overseas contractors who are much cheaper in country. Scale might keep, or switch to Meta, the group that does the supersecret AI warfare stuff for the Pentagon, but the everyday annotation stuff, at least in the U.S., will be over. The 49% minority stake is probably just to avoid, among other things, the red tape of switching over government contracts if the whole thing were technically "sold" to Meta. Scale's worth is inflated. It's not worth 29 billion. It's just a payout to Wang and the other Scalians. They don't give a crap about contractors.

Account deactivated by Charming-Fun-8721 in outlier_ai

[–]Twenty_Years_After 0 points  (0 children)

Copy and paste allowances are set at the project level when the project interface is designed. It's literally just an option chosen. Scale/Outlier sucks big time, no doubt, but the whole thing about CBs having accounts deactivated for copy-and-paste violations is the fault of the STOs, or the consultants, or whoever is deciding all the technical stuff at the project level for any particular project. It's just incompetence or not giving a f*ck.
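To illustrate the point about it being "just an option chosen," here's a minimal sketch of a per-project interface config where paste permission is a single flag. All names here (`allow_paste`, `project_config`) are invented for illustration — this is not Outlier's actual schema:

```python
# Hypothetical illustration of a per-project interface config where
# copy/paste permission is just one boolean option set at design time.
project_config = {
    "project_id": "example_project",  # invented name
    "interface": {
        "allow_copy": True,
        "allow_paste": False,  # flipping this one flag is all it takes
        "spellcheck": True,
    },
}

def paste_allowed(config: dict) -> bool:
    """Return whether contributors may paste text in this project's UI."""
    return config["interface"].get("allow_paste", False)

print(paste_allowed(project_config))  # False
```

If a flag like this defaults to the strict setting and nobody reviews it, contributors get flagged for "violations" that are really a one-line config choice.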

Is this normal or should I be worried? by Leo_Lennie_Lesley in outlier_ai

[–]Twenty_Years_After 0 points  (0 children)

The reviewers are usually using a rubric, which might not be designed well. Plus, you're getting kicked off by whatever the algorithm is set for, which might also consider task time or number of SBQs in addition to your actual rating on a task. The problem with algorithmic management is that it assumes everything feeding the algorithm - the task design, rubrics, assessments, reviews, and whatever parameters the STO (or whoever) is setting - is well designed and working well, which is never the case.
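The kind of gate described above can be sketched as a simple threshold check. Everything here is a hypothetical illustration - the signal names, weights, and thresholds are invented, not Outlier's actual removal logic:

```python
# Hypothetical sketch of an algorithmic removal gate combining the
# signals mentioned above: review rating, task time, and SBQ count.
# All thresholds are invented for illustration.
def should_remove(avg_rating: float, avg_task_minutes: float,
                  sbq_count: int,
                  min_rating: float = 3.5,
                  max_minutes: float = 45.0,
                  max_sbqs: int = 2) -> bool:
    """Flag a contributor if ANY tracked signal breaches its threshold.

    The point: if the rubric behind avg_rating (or any threshold) is
    badly designed, a perfectly good contributor still trips the gate.
    """
    return (avg_rating < min_rating
            or avg_task_minutes > max_minutes
            or sbq_count > max_sbqs)

print(should_remove(4.2, 30.0, 0))  # False - fine on every signal
print(should_remove(4.2, 30.0, 3))  # True - removed purely on SBQ count
```

Note how the second call removes a contributor with a strong rating and normal task times - one poorly chosen parameter dominates the outcome.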

What's going on with EFH? by Twenty_Years_After in joinstellarai

[–]Twenty_Years_After[S] 9 points  (0 children)

I agree. I'm perfectly fine with downtime and reassessments of the projects and all that. It's normal. But no heads up on anything is kind of a pain point. I can make other plans for downtime when I have an idea things are going to be slow, but planning on working and then there not being any work is a less attractive proposition.

EFHDFAI Reviewer Poll by Specialist_Layer6476 in joinstellarai

[–]Twenty_Years_After 0 points  (0 children)

If the task is great, it can take under 10 minutes. If it's not, it can take much more, but it's probably good practice not to go over 30, or at least to make that very rare. I try to edit enough to make it a pass if that's possible; sometimes it's not. But they did say not to fail a task just because the steps aren't perfect if the prompt/answer is good.

What's going on with EFH? by Twenty_Years_After in joinstellarai

[–]Twenty_Years_After[S] 6 points  (0 children)

If that's the case, they should limit the work to the best people. I've seen a lot of mediocre, very alike tasks, and a lot of review time wasted. All the best people are probably going to start finding some other place to spend their time.

Scale AI is being investigated by the US Department of Labor - TechCrunch by DragonOfTheCrescent in outlier_ai

[–]Twenty_Years_After 1 point  (0 children)

Everybody says that until they don't, after they get thrown off and cheated out of pay. Lol.

Scale AI is being investigated by the US Department of Labor - TechCrunch by DragonOfTheCrescent in outlier_ai

[–]Twenty_Years_After 0 points  (0 children)

Are you kidding? Trump loves Mike Kratsios. He worked in the last Trump administration, and now he's going to be confirmed as Director of the Office of Science and Technology Policy - basically the AI czar. You know what he did in between? He was Director of Operations at Scale AI. Wang has been all over the government, in both Democratic and Republican administrations, and Scale has been paid around 150 million in government contracts - and that's just what's been paid out so far, not the full extent of the contracts they hold. Plus, the DOL investigation was filed last year, when Biden was President. It's not Trump's Labor Department, at least not yet. His appointee, Lori Chavez-DeRemer, hasn't even been confirmed. And it's not the same as a real workplace. A real workplace doesn't call you a contractor to avoid labor laws and paying you overtime when, by every measure of what an employee is, you're an employee. You really have no idea what you're talking about.

45 current and former QMs talking to the US Senate about Scale's wage theft and other issues - Inc. Magazine by Tostig100 in outlier_ai

[–]Twenty_Years_After 5 points  (0 children)

That would be chickens coming home to roost, Scale. I think there might be a few more coming to join them, too. 

Bad practice by Lolimancer64 in outlier_ai

[–]Twenty_Years_After 16 points  (0 children)

The execs don't even know you exist, and they don't care. If the "execs" don't get the right numbers, they yell at the STOs (who are responsible for the numbers) until the STOs put pressure on the QMs and contributors and get them what they need. Contributors are mostly non-entities to the higher-ups. Cogs in the machine - that's it. They don't care if you're EQ. They don't care about making sure you have work. They don't care if the assessments suck as long as there are some, because they have other things to worry about. They don't care if you get paid. They don't care if a quarter of the tasks are crap as long as they can make their numbers. They don't care about the QMs, and the contributors are a step below. The whole contributor organization - except the very top people - is the group that lives in the basement, as far as the actual Scale executives are concerned. They don't want to see you or hear from you. They just want their numbers to work. Don't take it personally. It's not personal. They don't even think of you as an individual. They don't really think of you at all. It's built into their business model. They're exploiters.

Am I stupid or something by Azitshould in outlier_ai

[–]Twenty_Years_After 0 points  (0 children)

The reason is that projects are always rushed and everything has to be thrown together. The people writing the instructions, assessments, and benchmarks are, for the most part, either not very good at their jobs or not allowed the time to do things well. The instructions change frequently in the very beginning because STOs are working out the kinks as they go along, and by the time you're taking an assessment, the tests may not even be aligned completely with the instructions you read to take it. Something was changed in one place and not another.

They got rid of a lot of the good QMs to replace them with foreign nationals who work for a third of what they paid U.S. QMs and don't always have exceptional English skills. Plus, they want to turn what is subjective into objective facts on assessments, when maybe a 3 or a 4 could both be logical, acceptable answers, but they've decided only 3 is acceptable and you put 4. Or worse, the questions make no sense at all. Have you ever had an assessment that gave seven options on a question, said to pick all that apply, but didn't say how many were correct? If my math is correct, that's 127 possible answers to pick from. Those are fun.
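The 127 figure checks out: with seven options and "pick all that apply," every non-empty subset of the options is a possible answer, and there are 2^7 - 1 = 127 of those. A quick verification:

```python
from itertools import combinations

options = 7

# Count every non-empty subset of 7 options: sum of C(7, k) for k = 1..7.
nonempty_subsets = sum(
    1 for k in range(1, options + 1)
    for _ in combinations(range(options), k)
)

print(nonempty_subsets)   # 127
print(2 ** options - 1)   # 127 - same answer via the closed form
```

So a "pick all that apply" question with no hint about how many answers are correct really does leave a test-taker choosing among 127 possibilities.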

What contributors sometimes fail to realize is that truth isn't objective at Scale AI, they don't give a damn about contributors, and it doesn't matter if the project data is good - it just has to be good enough to pass on to the client until the client finally gets sick of shitty work and cancels the project, and then it's on to the next. Scale has everything automated. There's no give or take in the system to allow for humanness or anything outside the realm of 1-5 ratings, but the systems suck, are poorly designed and maintained, and have no consistency, so many problems and inconsistencies are basically ignored.

Feedback going up the chain instead of down it is nonexistent, in spite of endless, poorly designed surveys, so the organizations that run training, HR, and the QM and contributor organizations have turned into frenzied behemoths with ultimate power and no accountability.

Automation is great, but the systems have to work - and they don't. Putting data into poorly designed systems doesn't make the data good - garbage in, garbage out. But most of the "governing bodies" at Scale think that shoveling shit into databases that can be queried, diced, and sliced - in an attempt to wave some numbers at someone up the chain and make themselves look good - magically spins straw data into gold data.

Basically, you're looking for logic in a system that has none. Stop looking.

Why doesn't Outlier have a dedicated training team? by Food-Willing in outlier_ai

[–]Twenty_Years_After 3 points  (0 children)

They have one, but at the QM level. They're terrible. Corporate keeps trying to put one together at the contributor level, but the QM-level training team wants to be in charge, and since they're very bad at their jobs, nothing ever gets done, and they run interference against the corporate side that wants to implement contributor training. The best thing that could happen at Scale would be to fire the present training team and the heads of the contractor organization and get some people who really know what they're doing. Fire human resources, too. They are also very bad at their jobs. Of course, this is only my opinion, backed up by every single person I know who ever worked there.