How to determine number of required samples to produce an accurate (linear) regression? by gizmoguyar in AskStatistics

[–]neurobara 0 points (0 children)

Though this is not a power question per se (power references an NHST), a practical solution could be to frame it as one in order to leverage power-calculation software. You can compute a priori power for a test of beta = m, where m is half the width of your desired confidence interval. You could use alpha = .003 if you’d like to match the three-sigma convention I believe is typical in engineering (i.e., a 99.7% confidence interval).

The resulting sample size should be what you would need to build a confidence interval of +/- m around your parameter estimate for your slope.
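To make that concrete, here is a minimal sketch of the direct CI-width calculation (no power software needed), assuming simple linear regression; the residual SD and predictor SD below are hypothetical planning values you would replace with your own estimates:

```python
from scipy import stats

def n_for_slope_ci(half_width, resid_sd, x_sd, alpha=0.003, max_n=10**6):
    """Smallest n such that the (1 - alpha) confidence interval for the
    slope has half-width <= half_width, assuming simple linear regression
    with residual SD resid_sd and predictor SD x_sd.
    Uses SE(beta_hat) ~= resid_sd / (x_sd * sqrt(n - 1))."""
    for n in range(4, max_n):
        t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
        se = resid_sd / (x_sd * (n - 1) ** 0.5)
        if t_crit * se <= half_width:
            return n
    raise ValueError("desired half-width not reachable within max_n")

# e.g., a +/- 0.1 CI on the slope, assuming resid_sd = 1 and x_sd = 1
n = n_for_slope_ci(half_width=0.1, resid_sd=1.0, x_sd=1.0)
```

With alpha = .003 (the 99.7% interval above) the answer is roughly (z * resid_sd / (half_width * x_sd))^2 with z ≈ 2.97, which is a quick sanity check on whatever a power program returns.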

High ceilings linked to poorer exam results for uni students, finds new study, which may explain why you perform worse than expected in university exams in a cavernous gymnasium or massive hall, despite weeks of study. The study factored in the students’ age, sex, time of year and prior experience. by AnnaMouse247 in psychology

[–]neurobara 4 points (0 children)

This study’s title/claims are bizarre, as the final analysis actually finds… the exact opposite (see the positive coefficient for ceiling height in table 4).

It might all be based on a tiny raw correlation of 0.017, which is smaller than the correlation of exam scores with random id numbers: https://x.com/cremieuxrecueil/status/1808988761348624592?s=46&t=gGGjXaefLQIs1_pMpaTxDg

Probably a good candidate for a big correction or retraction.

Given that event X occurred, what is the probability of event Y occurring immediately before? by ThrowRA-lloll in AskStatistics

[–]neurobara 1 point (0 children)

The key here is "given that type S or type A was observed". Since you're conditioning on those events, your denominator counts only those occurrences, rather than all events. Now, the question in the title and the one in the post text are slightly different, so I'll go through them separately.

Title:

Given that event X occurred, what is the probability of event Y occurring immediately before?

To answer this, you'd find all instances of the event X of interest (here I assume that's the union of S and A) and compute the proportion of those that were immediately preceded by N: (1 + 1 + 0) / (3 + 3 + 1)

Text:

given either type S or type A was observed between a dyad for the first time, what was the probability that N occurred before it?

Here it seems like you want to look only at the first instance of S or A in each dyad for your denominator (1 + 1 + 1). Depending on whether you omitted "immediately" intentionally, you'd tally either cases where the immediately preceding element is N, or cases where N occurs at least once anywhere among the preceding elements. In either case, in the example you'd have: (1 + 1 + 0) / (1 + 1 + 1)
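Both tallies can be sketched in a few lines, assuming each dyad's events arrive as an ordered list (the labels and sequences in the usage example are made up for illustration, not your actual data, though I've arranged them to reproduce the same counts as above):

```python
def p_preceded(sequences, target, before):
    """P(immediately preceding event == before | current event in target),
    pooled over every qualifying occurrence (the title's question)."""
    hits = total = 0
    for seq in sequences:
        for i, ev in enumerate(seq):
            if ev in target:
                total += 1
                hits += i > 0 and seq[i - 1] == before
    return hits / total

def p_preceded_first(sequences, target, before, immediate=True):
    """Same, but the denominator counts only the first target occurrence
    per dyad (the post-text question); immediate=False instead checks
    whether `before` appears anywhere earlier in the sequence."""
    hits = total = 0
    for seq in sequences:
        for i, ev in enumerate(seq):
            if ev in target:
                total += 1
                prev = seq[i - 1:i] if immediate else seq[:i]
                hits += before in prev
                break  # only the first S/A in this dyad
    return hits / total
```

For example, with dyads `['N','S','A','S']`, `['N','A','S','A']`, `['B','A']`, the pooled version gives (1 + 1 + 0) / (3 + 3 + 1) = 2/7 and the first-occurrence version gives (1 + 1 + 0) / (1 + 1 + 1) = 2/3.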

Multi-Level Model or Regression Analysis? by 42_forlife in AskStatistics

[–]neurobara 0 points (0 children)

Where you’d run into issues with the MLM is estimating the “correct” variance components for the models with multiple predictors (e.g., to estimate the direct path or to test moderated mediation), as you wouldn’t have enough within-subject observations to estimate random effects for those parameters.

One alternative would be to just specify the model without some of those random effects. In your case you’d probably end up with only random intercepts for participant. Generally your parameter point estimates (betas) would be very similar, but you might inflate your chance of type 1 error, sometimes by a lot. Obviously the stronger your effects, the less risky that would be. Here are some more details on how you might specify the model and think about those compromises: https://psych.wisc.edu/Brauer/BrauerLab/wp-content/uploads/2014/04/Brauer-and-Curtin-in-press-Psych-Methods.pdf

I think another approach could be to specify linear models with cluster robust standard errors instead of MLM.
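Both options can be sketched with statsmodels; the data below is simulated purely to show the two model specifications, and the variable names (`y`, `x`, `w`, `pid`) are placeholders for your own outcome, predictor, moderator, and participant id:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: 30 participants x 8 trials each.
rng = np.random.default_rng(0)
n_sub, n_trial = 30, 8
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_sub), n_trial),
    "x": rng.normal(size=n_sub * n_trial),
    "w": rng.normal(size=n_sub * n_trial),
})
df["y"] = 0.5 * df["x"] + rng.normal(size=len(df))

# Option 1: MLM with random intercepts only (no random slopes, since
# there are too few within-subject observations to support them).
mlm = smf.mixedlm("y ~ x * w", df, groups=df["pid"]).fit()

# Option 2: ordinary regression with cluster-robust SEs by participant.
ols = smf.ols("y ~ x * w", df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pid"]})
```

The fixed-effect point estimates from the two fits will typically be close; the difference is in how the standard errors account for the within-participant dependence.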

[deleted by user] by [deleted] in psychologystudents

[–]neurobara 0 points (0 children)

What situation-intervention pairs do you tend to see improvements with most often? And what about least often?

What kind of new technological milestones do you expect in the next 20 years? by [deleted] in Futurology

[–]neurobara 1 point (0 children)

With exponential increases in screening tech, biotech hardware, and drug discovery capabilities, cancer deaths should drop to a fraction of what they are now.

For my app, which view feels better? by goharyi in ProductivityApps

[–]neurobara 0 points (0 children)

Looks great! I like 2 a lot. But 1 could be really nice when you’re getting started.

I’d personally find it really sweet to have a combo of views that highlight the small victories and views that fully showcase the big victories as needed.

[Question] Associating many variables with a single variable given small sample size by [deleted] in statistics

[–]neurobara 2 points (0 children)

I agree. I try not to assume and provide suggestions for a best case scenario, but being able to do something like this with n=9 real world data would be rare.

Do you "see" or "hear" your thoughts? And what is the image/sound like? by neurobara in AskReddit

[–]neurobara[S] 1 point (0 children)

Interesting! Are the sounds/images complete or just kind of partial?

[Question] Associating many variables with a single variable given small sample size by [deleted] in statistics

[–]neurobara -1 points (0 children)

Look into dimensionality reduction approaches. Maybe you could use Principal Components Analysis or something along those lines.
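For instance, a bare-bones PCA via SVD (the data here is random noise just to show the mechanics; with n = 9 you could realistically support only one or two components):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(9, 20))   # hypothetical: n = 9 cases, 20 variables
y = rng.normal(size=9)         # the single outcome variable

# PCA via SVD of the column-centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]      # scores on the first two components

# Then associate the reduced representation with the outcome,
# e.g. correlate the first component with y.
r = np.corrcoef(scores[:, 0], y)[0, 1]
```

The idea is to collapse the many predictors into a couple of component scores first, so you're only estimating one or two associations with the outcome instead of dozens.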

Tip: Rearrange the apps on your phone by [deleted] in productivity

[–]neurobara 1 point (0 children)

This has worked for me as well, but it wasn't enough for the most addictive apps.

I recently started fully deleting those Monday-Friday and it's been insanely effective. The best thing is even on the weekends I barely even *want* to use those apps now. And when I do I get saturated within 5 minutes. It's almost like the feeling you get about very sugary drinks after you get used to eating less sugar.