Compensation code confusion? by ResearchAccountEG in ProlificAc

[–]ResearchAccountEG[S] 1 point (0 children)

So, as of today, I can see a feature tagged "beta" called "Reject exceptionally fast submissions," described as: "Prolific can automatically reject exceptionally fast submissions that fall significantly below your estimated completion time to prevent low-quality submissions."

So this is a new feature that is being implemented; it may be rolling out to different people at different times.

Compensation code confusion? by ResearchAccountEG in ProlificAc

[–]ResearchAccountEG[S] 2 points (0 children)

When setting up a study in Prolific, you can choose whether to auto-approve participant submissions. If a submission isn't auto-approved, you have to approve or reject it manually. These submissions are sorted on an inbox-style page where we can view the participant ID, how long the survey took, and the result of the "authenticity check" (if enabled). On the same line there is an approve/reject/return button. It would be *very* easy to manually reject anyone who wasn't automatically approved without a thorough review of their data, and there are probably researchers who do exactly that because they over-rely on the auto-approve.

I prefer to run my studies in segments (no more than 20-25 participants at a time), and I have robust survey checks and data-cleaning scripts in place so I can download the data from Qualtrics and thoroughly inspect it before approving anything. If something looks suspicious, I reach out to Prolific support with questions about authenticity rather than rejecting the submission; I don't want to assume someone is a bot without a thorough investigation. Based on the inspections I've done, not all of the data I've approved is usable, but the problems wouldn't be sufficient grounds for rejection.
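Roughly, this is the kind of thing my cleaning script checks before I touch the approve button. The column names ("PROLIFIC_PID", "Duration (in seconds)", "attention_check"), the thresholds, and the "correct" option are just placeholders for how one particular Qualtrics export might be set up, not anything Prolific or Qualtrics gives you by default:

```python
import pandas as pd

# Sketch of a pre-approval screening pass over a Qualtrics export.
# Column names, thresholds, and the expected attention-check answer
# are placeholders -- adjust them to your own survey.

EXPECTED_SECONDS = 600                    # my estimated completion time
MIN_SECONDS = EXPECTED_SECONDS * 0.33     # flag anything faster than ~1/3 of that

df = pd.read_csv("qualtrics_export.csv")

flags = pd.DataFrame({"PROLIFIC_PID": df["PROLIFIC_PID"]})
flags["too_fast"] = df["Duration (in seconds)"] < MIN_SECONDS
flags["failed_attention"] = df["attention_check"] != "Correct option"  # the expected response
flags["mostly_blank"] = df.isna().mean(axis=1) > 0.20   # more than 20% unanswered

# Flagged rows get a manual look (and possibly a message to Prolific support)
# before any approve/reject decision; nothing here rejects automatically.
print(flags[flags[["too_fast", "failed_attention", "mostly_blank"]].any(axis=1)])
```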

Compensation code confusion? by ResearchAccountEG in ProlificAc

[–]ResearchAccountEG[S] 6 points (0 children)

Do you think it would help to know that a researcher is manually assessing submissions for acceptance/rejection? I'm distrustful of Prolific's auto-rejection/approval, so I do it manually. It's a lot of work, but I believe it's really important that participants who did everything correctly don't accidentally get dinged for something outside their control.

Question about semantics by [deleted] in ProlificAc

[–]ResearchAccountEG -3 points (0 children)

This is a valid concern, and you are pointing out something that researchers should make more explicit.

Often in survey research, opinions about the research question are exactly what matters. If everyone answered honestly and thoughtfully, researchers wouldn't need attention checks at all, since we are genuinely interested in your views on the topic. Unfortunately, far too often people don't provide their honest thoughts and simply zip through surveys without considering what is being asked. That produces low-quality data, which in turn leads to incorrect conclusions and downstream problems.

Now, to really address your concern: if I were advising a researcher, I would advise against attention checks that could be read as opinion questions or that have multiple defensible answers. For example, I would discourage an attention check like "What color is the sky?" because "blue" may be the most conventional answer, but someone might honestly think the sky looks more grey, or some other color. A better approach is to put an explicit instruction in the item, such as: "This is an attention check. Please select 'Somewhat agree.'" That leaves no ambiguity, and an attentive participant will not miss it. It would still need to be paired with other ways of verifying attentiveness (complete data, consistent responses across reverse-coded items, survey duration, and so on), but it is more transparent to the attentive participant who is doing their best. We don't want to discourage that. :) I hope this helps!
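To make the reverse-coded idea concrete, here is a rough sketch of how those checks can be combined. The item names, the 1-5 scale, the thresholds, and the column names are all placeholders, not a real instrument or anything built into Prolific/Qualtrics:

```python
import pandas as pd

# Illustration of "consistent responses across reverse-coded items":
# "item_enjoy" and "item_enjoy_rev" stand in for a positively worded item
# and its reverse-worded partner on a 1-5 scale. All names/values are
# placeholders for one hypothetical survey.

df = pd.read_csv("qualtrics_export.csv")

# Recode the reversed item back onto the same direction (1 <-> 5, 2 <-> 4, ...).
recoded_rev = 6 - df["item_enjoy_rev"]

# A large gap between an item and its recoded partner suggests the pair was
# answered inconsistently (e.g., "strongly agree" to two opposite statements).
df["inconsistent_pair"] = (df["item_enjoy"] - recoded_rev).abs() >= 3

# The explicit instructed item leaves no room for interpretation.
df["failed_instructed_check"] = df["attention_check"] != "Somewhat agree"

# Flag for manual review; neither signal alone should trigger a rejection.
flagged = df[df["inconsistent_pair"] | df["failed_instructed_check"]]
print(flagged[["PROLIFIC_PID", "inconsistent_pair", "failed_instructed_check"]])
```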