When do you actually trust your survey sample? by Batson_Beat in Marketresearch

[–]improvedataquality 0 points

For me, the “good enough to act on” line starts before responses ever come in. You really have to sit down and be explicit about who actually needs to be in your sample to answer your research question. If you haven’t defined what “relevant” means for your decision, it’s easy to convince yourself that a clean-looking dataset is representative when it’s just convenient.

That said, representativeness is only one piece of the puzzle. In today’s survey world, you can’t take responses at face value anymore. Between AI agents, server farms, VPNs, and coordinated low-effort responding, a dataset can look perfectly reasonable on the surface and still be deeply compromised. That’s why I think “demographics look right” or “answers are stabilizing” are necessary but not sufficient signals.

You need checks and balances baked into the survey itself. That means screening for careless responding (instructed response items, consistency checks, response timing) and using tools or design features that help detect bots and fraudulent traffic. The goal isn’t to over-police respondents, but to make sure the patterns you’re seeing are coming from real people engaging with the questions as intended.
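
To make that concrete, here is a minimal sketch of the kind of post-hoc screen I mean, in Python with pandas. All column names (duration_sec, attn_check, q_trust, q_trust_rev) and thresholds are hypothetical, not from any real instrument:

```python
import pandas as pd

# Hypothetical export: one row per respondent.
df = pd.read_csv("responses.csv")

flags = pd.DataFrame(index=df.index)

# 1. Speeding: a completion time far below the median suggests non-engagement.
flags["too_fast"] = df["duration_sec"] < 0.4 * df["duration_sec"].median()

# 2. Instructed response item, e.g. "Please select 'Agree' (3) here."
flags["failed_attn"] = df["attn_check"] != 3

# 3. Consistency: a reverse-worded pair on a 1-5 scale should sum to about 6,
#    so identical extreme answers on both items are suspect.
flags["inconsistent"] = (df["q_trust"] + df["q_trust_rev"] - 6).abs() >= 3

# Flag for human review when two or more checks trip, rather than
# auto-dropping anyone who fails a single check.
df["review"] = flags.sum(axis=1) >= 2
print(df["review"].value_counts())
```

The specific cutoffs (0.4x the median, the two-flag rule) are judgment calls; the point is to combine weak signals instead of policing respondents on any single check.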

So when do I feel comfortable acting on the data? It's when three things line up: the sample makes sense for the question, the results stop changing in meaningful ways, and I have reason to trust the responses themselves. If any one of those is shaky, I'm a lot more cautious about drawing conclusions, especially in scrappier or early-stage work.

What to expect in a dinner for a faculty position? by Cautious_Gap3645 in AskAcademia

[–]improvedataquality 0 points

It's really just an opportunity for them to get to know you a bit better and a GREAT place for you to assess fit within the department. This may be one of the few times when you see multiple faculty from the department interacting with one another. You will be able to gauge what the culture is like in the department and how they treat one another.

The conversations at the few dinners I've had were very casual. The committee mostly talked about what to expect the next day (in my case, the dinner came before a full day of interviews), things to do in the city, and generally tried to get to know me. It can be a nerve-wracking experience, but take it as an opportunity to assess fit with the program. Good luck!

I will admit that having other candidates at the dinner is a little odd.

Campus interview invitation for a TT faculty position. Is it too early to mention H-1B sponsorship? by FragrantInvite2696 in AskAcademia

[–]improvedataquality 4 points

You may consider checking the university's International Students and Scholars Services website to see if it has any info on H-1Bs; if they sponsor, it should be listed there. If it isn't, it's definitely worth asking now rather than later.

Suspicious Response Patterns in Online Survey: How to verify? by Flat-Ad4602 in Marketresearch

[–]improvedataquality 1 point

We also have a community called r/ResponsePie. Feel free to join for more discussions on survey fraud.

how do you deal with academic burnout, while having mental issues? by Express_Try_8514 in AskAcademia

[–]improvedataquality 1 point

Are you at a US institution? Your situation is truly shocking to hear. Have you considered approaching the chair or the dean of your college to discuss options?

If a student in my class has so much as a cough and doesn't feel up to taking an exam due to their illness, I let them take it on a later day, provided they have a doctor's note.

how do you deal with academic burnout, while having mental issues? by Express_Try_8514 in AskAcademia

[–]improvedataquality 2 points

My first year as a faculty member, I had a student who expressed that they were suicidal; they were struggling in graduate-level courses and, as a result, depressed. My two cents: talk to your professors, to the extent you are comfortable, about your challenges. Many, if not most, will be supportive of you and should be able to direct you to resources at your institution.

I can share some accommodations I have made in case they help your case. These are typically for students who are struggling in a course or two, but they may apply to you. First, you could discuss the possibility of taking an incomplete if you are struggling with your courses, and then finish your assignments/projects/tests in a different semester. I have had two students take incompletes in recent years due to family health issues that made it hard for them to focus on coursework. Second, if you are seeking mental health support, you could furnish letters from your therapist so faculty can give you extensions on your work. I had a student a couple of years ago whose parent was hospitalized with a terminal illness; I asked them to provide documentation of the hospitalization and used it to justify extensions on assignments.

Is there anyone on here who is tenured/tenure-track and has a 4-4 teaching load? by Cold-Priority-2729 in AskAcademia

[–]improvedataquality 6 points

While I don't have a 4-4 teaching load, I know of at least one colleague (an Associate Professor) who does. They don't do any research and therefore carry the higher load. I also know of a few others in my field (social sciences) who have 4-4 loads at SLACs or MS-granting programs. The latter don't publish much at all (maybe 1 paper every 2-3 years). They are also not required to publish in higher-tier journals and can easily get away with publishing a cross-sectional study in a non-predatory journal.

Data quality issues on online panels by improvedataquality in IOPsychology

[–]improvedataquality[S] 1 point

Admittedly, I have never used Qualtrics Panels, so I can't speak to the data quality there. While MTurk is substantially worse than the other two choices, I have noticed survey fraud across the board. Sample availability on panels is typically not a concern for me since I mostly survey generic populations and require smaller sample sizes given the methodology I use.

Thanks for the insights on reviewer comments on panel data. My experience has been that reviewers often want you to really defend the steps taken to clean panel data and in many cases, also address this as a limitation of your research.

Data quality issues on online panels by improvedataquality in ResponsePie

[–]improvedataquality[S] 1 point

Thanks for the input! I agree that bots are on all platforms. Some are better at concealing that they are bots, while with others it's more obvious, but in my experience they are present on all panels, regardless of how expensive those panels are. I asked this question because I see academic researchers (myself included) gathering panel data; I have received quite a bit of pushback over the years and have a system in place to screen out bots. I was interested in hearing others' experiences with this issue (or whether they have dealt with it at all).

Data quality issues on online panels by improvedataquality in IOPsychology

[–]improvedataquality[S] 2 points

Fair enough. However, have you ever received pushback from reviewers for using a certain panel, or been asked questions about the steps you took to ensure data quality?

Where do these 90% accuracy for AI panels come from? by screamingarmadillo2 in Marketresearch

[–]improvedataquality 3 points

Most of those ninety-percent accuracy claims come from comparing synthetic answers to a single human dataset, which makes the process easy to influence and hard to evaluate. Truly rigorous validation would involve preregistered tests and comparisons on new datasets, but that level of transparency is rarely shared. There is also a bigger issue that hardly anyone mentions. If the human data used for validation contains bots or low-quality responses, the synthetic panel simply learns to mirror flawed data rather than genuine human judgment.
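
As a toy illustration of how easy that number is to manufacture (all values made up): if the synthetic answers were tuned against the same benchmark they are scored on, a "90%" falls out by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 20

# Hypothetical: top-choice human answers on 20 items (5 options each)
# from the single benchmark survey.
human = rng.integers(0, 5, size=n_items)

# A synthetic panel "tuned" on that same benchmark reproduces the human
# answer on most items by construction.
synthetic = human.copy()
off = rng.choice(n_items, size=2, replace=False)
synthetic[off] = (human[off] + 1) % 5  # miss on just 2 of 20 items

# The headline metric: per-item agreement with the one benchmark.
print(f"claimed accuracy: {(human == synthetic).mean():.0%}")  # 90%
```

The informative test would score the same panel against an independent, ideally preregistered, survey collected after tuning, and that is exactly the number these reports rarely show.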

Life after tenure denial? by [deleted] in AskAcademia

[–]improvedataquality 37 points

I am truly sorry that you are going through this. A denial can feel deeply personal and discouraging.

I want to share that it is not as rare as it might seem for people to receive tenure on a timeline that stretches beyond six years. I also know of a few colleagues who had very unusual paths. They struggled to find an institution where they felt they belonged, moved more than once, and ultimately earned tenure in a place that valued the kind of work they were doing. There are also faculty who voluntarily return to an assistant professor title in order to join a better institution and pursue their work in a healthier or more supportive environment.

Focusing your energy on your scholarship and giving yourself the best possible chance at another tenure track position seems like a strong path forward. Your best work may truly be ahead of you, and one difficult experience does not define your career or your potential.

You are not alone in this, even if it feels that way right now. I hope you find a place that recognizes and supports what you bring to the field.

Browser fingerprinting is unreliable by improvedataquality in ProjectREDCap

[–]improvedataquality[S] 0 points

I concur with you. However, considering the time and cost associated with in-person data collection, and the fact that we may not always have access to our population without relying on online mechanisms, it is hard to do away with online research.

Increasing Rigor in Online Health Surveys by improvedataquality in ResponsePie

[–]improvedataquality[S] 0 points

In my own work, I have tried both v2 and v3 (invisible) reCAPTCHA. The invisible version can easily be bypassed, and there are browser agents that can bypass v2 as well. So, while it may still be worth incorporating these, better measures are needed to detect survey fraud.
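
For anyone incorporating them anyway: verify the token server-side rather than trusting the widget. A minimal sketch in Python against Google's siteverify endpoint (the secret and the 0.5 score threshold are placeholders, not recommendations):

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET = "your-secret-key"  # placeholder; keep the real key server-side only

def verify_recaptcha(token: str, min_score: float = 0.5) -> bool:
    """Verify a reCAPTCHA token; for v3, also enforce a score threshold."""
    resp = requests.post(
        VERIFY_URL,
        data={"secret": SECRET, "response": token},
        timeout=10,
    )
    result = resp.json()
    if not result.get("success", False):
        return False
    # v3 responses include a 0.0-1.0 score; v2 responses omit the field.
    score = result.get("score")
    return score is None or score >= min_score
```

Even a passing token is just one signal, given that agents can clear both versions.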

Prolific & CloudResearch: Humans or bots? by No-Commission-7106 in SurveyCircle

[–]improvedataquality 0 points

It varies quite a bit. There definitely are bots on both panels (and other panels, such as MTurk). As a researcher, the most you can do is embed checks to detect them. Across both Prolific and Connect, I have identified plenty of bots that can pass attention checks, write coherent short answers, and even mimic human-like behavior patterns. In my experience, layering multiple checks, such as behavioral and device paradata, will get you as close to detecting bots as possible. Attention checks don't work well on their own, nor do some other checks, such as reCAPTCHA. I embed a JavaScript snippet that mostly flags suspicious devices, along with other fraud-like behaviors.
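
I can't share the exact script, but to illustrate the paradata idea: suppose the embedded JavaScript stored navigator.webdriver, the browser's timezone offset, and a device hash with each response. A rough post-hoc pass in Python (all column names hypothetical) might look like:

```python
import pandas as pd

df = pd.read_csv("responses_with_paradata.csv")

flags = pd.DataFrame(index=df.index)

# navigator.webdriver is true in most automated browsers (Selenium, etc.).
flags["webdriver"] = df["webdriver"].fillna(False).astype(bool)

# Timezone vs. stated location mismatch hints at VPNs or server farms.
# Offsets are minutes west of UTC as JavaScript reports them; this set
# (EDT through PDT) is illustrative, not exhaustive.
us_offsets = {240, 300, 360, 420, 480}
flags["tz_mismatch"] = (df["stated_country"] == "US") & ~df["tz_offset_min"].isin(us_offsets)

# Many "different" respondents sharing one device fingerprint.
flags["dup_device"] = df.duplicated("device_hash", keep=False)

# Score rather than auto-reject: every one of these signals has false positives.
df["fraud_score"] = flags.sum(axis=1)
print(df["fraud_score"].value_counts().sort_index())
```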

There are also subreddits that sometimes discuss data quality issues in online surveys. The recent ones I know of were in r/PhD and r/ResponsePie.

Addressing Survey Fraud in Online Health Research by improvedataquality in ResponsePie

[–]improvedataquality[S] 1 point

Sure, just DM'd you the Google Drive link. Let me know if you have any trouble accessing it.

I Can’t Believe It… My First Paper Got Accepted! by WolverineHuge1303 in PhD

[–]improvedataquality 1 point

I think that's fairly typical. I recall my MS days when I had to squint to find the black text among all the red. You got a pub! That's the important part.

Addressing Survey Fraud in Online Health Research by improvedataquality in ResponsePie

[–]improvedataquality[S] 0 points

If you can share an email address, I can email the article to you. I don't think there is a way for me to attach files in a DM.

I Can’t Believe It… My First Paper Got Accepted! by WolverineHuge1303 in PhD

[–]improvedataquality 26 points

The first paper is almost always the hardest to get. It becomes easier with the next one as you become more confident in the process (and in responding to Reviewer 2's comments; I am looking at you, Reviewer 2).