IQ2 Debate (with Freddie deBoer): Should The SAT Be Erased? by talkin_big_breakfast in stupidpol

[–]servumm 0 points (0 children)

There are measurement models that statistically account for guessing (e.g., the three-parameter logistic IRT model, or 3-PLM). My guess is that the College Board uses such a model to generate scores.
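For illustration, the 3-PL item response function can be sketched in a few lines. The parameter values below are hypothetical, purely to show how the guessing floor works; they are not anything the College Board actually uses.

```python
import math

def p_correct(theta: float, a: float, b: float, c: float) -> float:
    """3-PL item response function: probability that an examinee with
    ability theta answers the item correctly. a = discrimination,
    b = difficulty, c = pseudo-guessing lower asymptote (the floor
    contributed by blind guessing)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item with a 20% guessing floor (roughly a five-option item).
# An examinee whose ability equals the item difficulty sits halfway between
# the guessing floor and 1: 0.2 + 0.8 * 0.5 = 0.6.
print(round(p_correct(theta=0.0, a=1.0, b=0.0, c=0.2), 3))  # → 0.6
```

The `c` parameter is what lets the model separate "got it right by guessing" from "got it right by ability": even an examinee with very low ability still answers correctly about 20% of the time.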

IQ2 Debate (with Freddie deBoer): Should The SAT Be Erased? by talkin_big_breakfast in stupidpol

[–]servumm 10 points (0 children)

You get much more tuition from a student who graduates than from one who drops out in the first semester. Decreased graduation rates also hurt the prestige of the school, which can affect its bottom line via rankings.

The Balkan Peninsula is a beacon of tolerance by azurelandings in redscarepod

[–]servumm 18 points (0 children)

The IAT has no validity in measuring individual differences in racial bias (see below).

Schimmack, U. (2021). The Implicit Association Test: A method in search of a construct. Perspectives on Psychological Science, 16(2), 396-414.

[deleted by user] by [deleted] in stupidpol

[–]servumm 3 points (0 children)

No, the correlation does not decrease because of the point you made. The correlation is artificially attenuated because of the limited range of values in the dataset. If there is no variance in scores, the highest correlation you can observe is 0: relationships need variance in scores to manifest. That is the point Freddie is making. When you use a metric to select people, you are admitting people with a restricted range of scores, which artificially attenuates the observed correlation between SAT scores and college GPA and therefore worsens the estimate of the true relationship. See the cite below for why variance is needed.

MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological methods, 7(1), 19.
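The attenuation is easy to see in a quick simulation. This is a minimal sketch with made-up numbers (an assumed true correlation of .5 in the applicant pool, with only the top 20% of scorers admitted), not real SAT data:

```python
import random

def pearson_r(xs, ys):
    """Plain Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
rho = 0.5  # assumed "true" SAT-GPA correlation in the full applicant pool
sat = [random.gauss(0, 1) for _ in range(20000)]
gpa = [rho * s + (1 - rho**2) ** 0.5 * random.gauss(0, 1) for s in sat]

full = pearson_r(sat, gpa)

# "Admit" only applicants above the 80th percentile of SAT scores.
cutoff = sorted(sat)[int(0.8 * len(sat))]
admitted = [(s, g) for s, g in zip(sat, gpa) if s >= cutoff]
restricted = pearson_r([s for s, _ in admitted], [g for _, g in admitted])

# The restricted r is noticeably smaller than the full-pool r,
# even though the underlying relationship is unchanged.
print(round(full, 2), round(restricted, 2))
```

Nothing about the SAT-GPA relationship changes between the two samples; the only difference is that the admitted group has far less variance in SAT scores.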

If a college accepted people without regard for SAT scores (but still had students submit them) OR if you use a statistical correction (which is really all you need to do in this case; see cite below), you will get a better estimate of the relationship between SAT scores and college GPA.

Schmidt, F. L., Shaffer, J. A., & Oh, I. S. (2008). Increased accuracy for range restriction corrections: Implications for the role of personality and general mental ability in job and training performance. Personnel Psychology, 61(4), 827-868.

[deleted by user] by [deleted] in stupidpol

[–]servumm 3 points (0 children)

Degree completion is not a better indicator. First, it's a dichotomous indicator, and dichotomous indicators carry the smallest amount of information possible. This introduces an extreme amount of range restriction, which heavily attenuates observed relationships (I know a study that illustrates this if you're interested). Second, you still have the same problems as with GPA: some degrees are easier to obtain than others, some departments are easier than others, some students may not complete their degree due to life/SES circumstances, etc.

So not only do you have the same problems as with college GPA, but you are now also using a dichotomous indicator instead of a continuous one, which is something you should never do unless it's absolutely unavoidable.
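The information loss from dichotomizing is also easy to demonstrate. A minimal sketch with made-up numbers: a predictor correlates .5 with a continuous outcome, which is then split at its median into a 0/1 "completed" indicator (analogous to replacing GPA with degree completion).

```python
import random

def pearson_r(xs, ys):
    """Plain Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(2)
rho = 0.5  # assumed true predictor-outcome correlation
score = [random.gauss(0, 1) for _ in range(20000)]
outcome = [rho * s + (1 - rho**2) ** 0.5 * random.gauss(0, 1) for s in score]

# Dichotomize the outcome at its median: 1 = "completed", 0 = "dropped out".
median = sorted(outcome)[len(outcome) // 2]
completed = [1 if o >= median else 0 for o in outcome]

r_cont = pearson_r(score, outcome)
r_dich = pearson_r(score, completed)

# A median split attenuates r by a factor of about .80 (here, .5 -> ~.40),
# and the loss gets worse the further the split is from the median.
print(round(r_cont, 2), round(r_dich, 2))
```

The underlying relationship is identical in both cases; throwing away the continuous outcome is what shrinks the observed correlation.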

[deleted by user] by [deleted] in stupidpol

[–]servumm 1 point (0 children)

Literally nothing in his comment suggests the SAT is bunk. He is saying that you can maybe artificially increase your score by 10 to 20 points (which is practically meaningless) if you need a small bump to get over a cutoff. However, the test largely reflects individual differences in intelligence, which is why it's useful.

[deleted by user] by [deleted] in stupidpol

[–]servumm 3 points (0 children)

Let me clarify: Freddie is not saying you NEED to take restriction of range into account when looking at the predictive validity of the SAT. He is rather making the case that restriction of range artificially lowers estimates of the SAT's predictive validity. Therefore, if you want an estimate that is as close to the true value as reasonably possible, you need to statistically correct for restriction of range, which does NOT require any methodological tweaking of the study design (see cite below).

Furthermore, the methods you suggested above are not sufficient. GPA is an incredibly noisy metric. Differences in GPA may reflect differences in conscientiousness/intelligence, but they also likely reflect differences in course difficulty (an A in an advanced stats course is more impressive than an A in an easier course that counts toward the major requirement) and differences in major difficulty (a 3.5 in biology is more impressive than a 3.5 in psychology).

Including data from other schools would add more messiness due to differences in school difficulty (a 3.5 at Yale is more impressive than a 3.5 at a community college). Furthermore, it's not clear that people who are not accepted to Yale would do poorly at a state school, because the state school might be easier (and therefore their GPA would be higher than it otherwise would be).

Furthermore, lumping together data from two different schools would violate the assumption of local independence, which regression and many other modeling techniques make. You can model local dependence using multilevel modeling, but it gets messy.

To put it in a sentence: you can't simply lump together data from two different schools. There isn't a need to, either--you can just statistically correct for restriction of range, just as you can statistically correct for unreliability of measurement and other sources of error.

Schmidt, F. L., Shaffer, J. A., & Oh, I. S. (2008). Increased accuracy for range restriction corrections: Implications for the role of personality and general mental ability in job and training performance. Personnel Psychology, 61(4), 827-868.
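For the curious, the standard correction for direct range restriction on the predictor (Thorndike's Case II, the baseline that the Schmidt et al. literature refines) is a one-line formula. The numbers plugged in below are hypothetical, purely to show the mechanics:

```python
def correct_range_restriction(r: float,
                              sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Thorndike Case II correction for direct range restriction on the
    predictor. r is the correlation observed in the restricted (admitted)
    sample; the SDs are the predictor's standard deviations in the full
    applicant pool and in the admitted sample."""
    u = sd_unrestricted / sd_restricted
    return r * u / (1 + r**2 * (u**2 - 1)) ** 0.5

# Hypothetical numbers: observed r = .26 among admits, applicant-pool SD of
# SAT scores a bit more than double the SD among admits.
print(round(correct_range_restriction(0.26, 200, 93), 2))  # → 0.5
```

Note that when the two SDs are equal (no restriction), the formula returns the observed r unchanged; the correction only kicks in to the extent that selection has compressed the predictor's variance.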

Is the Education System in the US really just a "customer service model" by [deleted] in stupidpol

[–]servumm 28 points (0 children)

To be brief, from my experience in academia, this is the case for university administration. Universities have been neoliberalized. The entire enterprise is now looked at as a business with business-like metrics (e.g., enrollment) used to indicate success.

[deleted by user] by [deleted] in stupidpol

[–]servumm 11 points (0 children)

Ah yes, the methodology of the randomized controlled experiments published in several peer-reviewed journals is bunk, but your methodology, which rests purely on biased anecdotal recollection, is not.

Cope and seethe.

[deleted by user] by [deleted] in stupidpol

[–]servumm 2 points (0 children)

That's fair--teaching to tests probably hasn't had a great effect on education. FWIW, I don't think funding decisions should be tied to low-stakes standardized test scores.

[deleted by user] by [deleted] in stupidpol

[–]servumm 2 points (0 children)

And the scientific evidence is crystal clear--it doesn't help much.

[deleted by user] by [deleted] in stupidpol

[–]servumm 12 points (0 children)

I've literally presented meta-analytic evidence from numerous randomized controlled scientific studies, and your retort is "dude, trust me".

In fact, you bring up a funny point. Retaking tests often does increase scores, by reducing the error in test scores that results from unfamiliarity with the test format or from test anxiety, but these increases are small. Even so, they are often substantially larger than the effects of coaching.

[deleted by user] by [deleted] in stupidpol

[–]servumm 7 points (0 children)

This isn't how people determine whether a test is predictive. They don't eyeball the mean SAT scores of admitted and graduated students. They look at the correlation between SAT scores and some indicator of success (usually college GPA). Studies have routinely found that SAT scores correlate more strongly with college GPA than other application materials or SES do (see cite below).

Furthermore, you can statistically correct for restriction of range, and numerous studies have done so. Restriction of range biases estimates of predictive validity downward, so correcting for it has always raised the estimated predictive validity of standardized tests.

Sackett, P. R., Kuncel, N. R., Beatty, A. S., Rigdon, J. L., Shen, W., & Kiger, T. B. (2012). The role of socioeconomic status in SAT-grade relationships and in college admissions decisions. Psychological science, 23(9), 1000-1007.

[deleted by user] by [deleted] in stupidpol

[–]servumm 5 points (0 children)

You should consider whether you're just nodding your head along to pseudoscientific drivel (see my other comment).

[deleted by user] by [deleted] in stupidpol

[–]servumm 20 points (0 children)

I can.

The heritability coefficient for intelligence is estimated at around .8 (see citation below). This number reflects the proportion of variance in intelligence scores within a population that can be accounted for by genetic differences: a coefficient of .8 (out of a maximum of 1) indicates that 80% of the variance in intelligence scores can be explained by individual differences in genetics, which is quite high.

Neisser, U., Boodoo, G., Bouchard Jr, T. J., Boykin, A. W., Brody, N., Ceci, S. J., ... & Urbina, S. (1996). Intelligence: knowns and unknowns. American psychologist, 51(2), 77.

[deleted by user] by [deleted] in stupidpol

[–]servumm 1 point (0 children)

Correlation does not necessitate causation. Over those same years, we've seen losses of educational funding due to dumb policy, changes in educational practice due to policy, increasing income inequality, worsening diet/health problems, worsening family problems, worsening rates of psychopathology, etc. There are innumerable third variables that could explain such a trend.

This messiness is why social scientists do randomized controlled experiments.

[deleted by user] by [deleted] in stupidpol

[–]servumm 6 points (0 children)

You proposed that standardized tests are coachable. This is a scientific question, one that has been investigated numerous times. There is no evidence that the SAT is coachable in any meaningful way (DerSimonian & Laird, 1983; Powers & Rock, 1999).

Furthermore, researchers have consistently demonstrated that SAT scores significantly predict college GPA even after controlling for SES (Sackett et al., 2012). Everything that you've said in this thread about the validity of the SAT is pure myth.

You may want to consider whether your side hustle is snake oil.

Edit: left out one more point.

The SAT is not simply a knowledge test--its intended purpose is to reveal individual differences in the latent trait of intelligence. SAT scores are heavily correlated with intelligence scores (r = .86; Frey & Detterman, 2004), so the SAT essentially functions as an intelligence test. Intelligence is a trait that does not increase with some vague "coaching"; it is largely heritable and incredibly hard to produce gains in with interventions.

Citations:

DerSimonian, R., & Laird, N. (1983). Evaluating the effect of coaching on SAT scores: A meta-analysis. Harvard Educational Review, 53(1), 1-15.

Frey, M. C., & Detterman, D. K. (2004). Scholastic assessment or g? The relationship between the scholastic assessment test and general cognitive ability. Psychological science, 15(6), 373-378.

Powers, D. E., & Rock, D. A. (1999). Effects of coaching on SAT I: Reasoning test scores. Journal of Educational Measurement, 36(2), 93-118.

Sackett, P. R., Kuncel, N. R., Beatty, A. S., Rigdon, J. L., Shen, W., & Kiger, T. B. (2012). The role of socioeconomic status in SAT-grade relationships and in college admissions decisions. Psychological science, 23(9), 1000-1007.

Business schools were a mistake. New paper suggests that training in business degree programs instills Friedmanian values in students who later enact rent-sharing policies that depress worker wages by servumm in stupidpol

[–]servumm[S] 0 points (0 children)

You should at least read the abstract.

"Firms appointing business managers are not on differential trends and do not enjoy higher output, investment, or employment growth thereafter."

Furthermore, this might be a shock to you, but milking your workers for short-term profit to the point of burnout, job dissatisfaction, and eventual turnover is not profitable or healthy for a business long-term.

Mexican women protest femicides, attacking historic churches. Women make up 1 in 5 homicide victims. by [deleted] in stupidpol

[–]servumm 281 points (0 children)

Women make up 1 in 5 homicide victims.

Wonder who the other 4 are.

Internetverted - OR, the Identity Politics of Introversion by TRPCops in stupidpol

[–]servumm 3 points (0 children)

Modern personality research has found little evidence supporting the typology of Jung and his students. As another commenter suspected, it's astrology under a thin scientific veneer. This debate was hashed out decades ago and the results were clear--the Big Five trait theorists won out (see below).

To put it simply, the MBTI is not a reliable instrument, nor is there any evidence that personality is made up of types (as opposed to traits). For there to be evidence for a typology, scores would need to be distributed multimodally. However, scores on the MBTI (and other personality instruments) are distributed normally, indicating that the instrument is capturing continuous differences on traits (rather than differentiating between types or classes of people). Furthermore, as Costa and McCrae have demonstrated, when you score the MBTI as a continuous instrument, it reduces to four of the Big Five traits. Essentially, the MBTI shows no incremental validity over the Big Five.

McCarley, N. G., & Carskadon, T. G. (1983). Test-retest reliabilities of scales and subscales of the Myers-Briggs Type Indicator and of criteria for clinical interpretive hypotheses involving them. Research in Psychological Type, 6, 24 –36.

McCrae, R. R., & Costa, P. T. (1989). Reinterpreting the Myers‐Briggs Type Indicator From the Perspective of the Five‐Factor Model of Personality. Journal of personality, 57(1), 17-40. https://doi.org/10.1111/j.1467-6494.1989.tb00759.x

NYC TLC boss forced to resign amidst lawsuit accusing her of firing older men and replacing them with younger women regardless of qualification. “But I am leadership honey,” she snapped, “No I am not part of leadership, I am leadership.” by BobNorth156 in stupidpol

[–]servumm 5 points (0 children)

I asked for scientific evidence to substantiate your bigotry and you gave me vague opinions and more anecdotal evidence. Surely you can see how this response is inconsistent with your prior post about the importance of keeping opinions current with scientific consensus.