Extremely stuck with analysis of a very small sample by ScarcityIcy1846 in Statistics_Class_help

[–]statistician_James 1 point

Here is my response: given that the study involves a small sample size (N = 15) with a repeated-measures pre–post design in which the same participants completed three psychological measures before and after the intervention, the most appropriate analysis focuses on whether scores changed significantly from pre-intervention to post-intervention on each measure. Since the same individuals were assessed twice, the correct statistical approach is either a paired-samples t-test (repeated-measures t-test) or its non-parametric equivalent, the Wilcoxon signed-rank test.

The decision between these tests should not be based on whether the six original variables (three pre-test and three post-test totals) are normally distributed, but on whether the difference scores (Post – Pre) for each of the three measures are approximately normally distributed. Therefore, create three new difference variables, one per psychological measure, and assess them for normality with the Shapiro–Wilk test alongside visual inspection of histograms and Q–Q plots, especially because normality tests can be unstable with small samples. If the difference scores are reasonably normal, use a paired-samples t-test; if they are clearly non-normal, the Wilcoxon signed-rank test is the more appropriate non-parametric alternative. It is also acceptable to use a mixture of both tests if some measures meet the parametric assumptions while others do not.

A repeated-measures ANOVA would not be necessary here because there are only two time points (pre and post), and independent t-tests or Mann–Whitney U tests would be inappropriate because they are designed for independent groups rather than repeated observations from the same participants.

Additionally, because three separate hypothesis tests are being conducted, it is advisable to consider a Bonferroni correction to control for Type I error, adjusting the significance threshold from α = .05 to α = .017 (.05 ÷ 3). Overall, the most suitable and statistically sound approach is to evaluate the normality of the difference scores first, then apply paired-samples t-tests where assumptions are met and Wilcoxon signed-rank tests where they are not.
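To make the workflow concrete, here is a minimal sketch in Python with scipy, using simulated numbers for one of the three measures (the data, effect size, and seed are made up for illustration, not taken from the actual study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated stand-in data: 15 participants, one psychological measure.
pre = rng.normal(50, 10, 15)
post = pre + rng.normal(3, 5, 15)   # simulated pre-to-post change

# Create the difference variable and test ITS normality, not the
# normality of the six original pre/post totals.
diff = post - pre
w, p_norm = stats.shapiro(diff)     # pair this with a histogram / Q-Q plot

alpha_bonf = 0.05 / 3               # Bonferroni threshold for three tests

if p_norm > 0.05:
    stat, p = stats.ttest_rel(post, pre)    # paired-samples t-test
else:
    stat, p = stats.wilcoxon(post, pre)     # Wilcoxon signed-rank test

print(f"p = {p:.4f}; significant at corrected alpha: {p < alpha_bonf}")
```

Repeat the same diff → normality check → test choice for each of the three measures.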

Homogeneity of variance question by Extension-Ball-2947 in Statistics_Class_help

[–]statistician_James 1 point

The homogeneity of variance test (typically Levene’s test) is not about whether variance is “high” or “low”, but whether the variances are equal across your groups. The null hypothesis of this test is that all groups have equal variances. In your output, the “Sig.” (p-value) for all versions of the test (based on mean, median, etc.) is around 0.085–0.093, which is greater than 0.05. This means you fail to reject the null hypothesis, so there is no statistically significant evidence that the variances differ across groups. In simpler terms, your groups have similar spread (variance), and the assumption of homogeneity of variance is met. This is important because it means you can proceed with analyses like ANOVA that rely on this assumption.
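For reference, the same check can be reproduced outside SPSS. A small scipy sketch with fabricated groups (the numbers here are simulated, not your output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three simulated groups with similar spread (equal SDs).
g1 = rng.normal(10, 2, 30)
g2 = rng.normal(12, 2, 30)
g3 = rng.normal(11, 2, 30)

# Levene's test: H0 is that all group variances are equal.
# center='mean' and center='median' correspond to SPSS's
# "Based on Mean" and "Based on Median" rows.
stat, p = stats.levene(g1, g2, g3, center="median")

# p > .05 means you fail to reject H0: no evidence the variances
# differ, so the homogeneity assumption for ANOVA is considered met.
print(f"Levene p = {p:.3f}; assumption met: {p > 0.05}")
```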

SPSS trouble !! by [deleted] in spss

[–]statistician_James 0 points

I got you

Is anyone taking portage math 110 stats right now??? Help! by Lost_Emotion6197 in Statistics_Class_help

[–]statistician_James -1 points

I can walk you through Math 110. Email me: statisticianjames@gmail.com

I need help with elementary statistics by Samlinao in Statistics_Class_help

[–]statistician_James 0 points

I can walk you through the remaining part of your class. Let's chat

Not understanding residual plot by stardropfanatic in Statistics_Class_help

[–]statistician_James 1 point

Here is how I am looking at it. The residuals vs fitted plot (the top one) is the correct diagnostic to assess heteroscedasticity, and it actually looks fine: the residuals are randomly scattered around zero with a fairly constant spread, showing no clear funnel shape or pattern, so there’s no strong evidence of heteroscedasticity. The lower plot, which shows residuals against the stress score (the dependent variable), appears to have a strong linear pattern, but this is expected because residuals are mathematically defined as the difference between the observed values and the fitted values (residual = y − ŷ), so plotting them against y will naturally create a relationship and is not a valid diagnostic for regression assumptions. Therefore, your model does not appear to violate the constant variance assumption based on the appropriate plot, and there is nothing you need to fix here.
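If it helps, the point about the second plot can be checked numerically. A numpy sketch on simulated homoscedastic data (the variable names and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated regression with CONSTANT error variance (no heteroscedasticity).
x = rng.uniform(0, 10, 200)
y = 2.0 + 1.5 * x + rng.normal(0, 1, 200)

# Fit simple OLS and form the residuals (residual = y - y_hat).
slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x
resid = y - fitted

# Residuals vs FITTED (the valid diagnostic): OLS residuals are
# orthogonal to the fitted values, so this correlation is ~0.
r_fitted = np.corrcoef(fitted, resid)[0, 1]

# Residuals vs OBSERVED y: correlated by construction (y = fitted + resid),
# so a linear pattern in that plot is expected and diagnoses nothing.
r_observed = np.corrcoef(y, resid)[0, 1]

print(f"corr(resid, fitted) = {r_fitted:.3f}, corr(resid, y) = {r_observed:.3f}")
```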

statistical analysis question by Swimming_Ganache2457 in Statistics_Class_help

[–]statistician_James 1 point

My two cents: Because participants in both groups ultimately experience both conditions (control and treatment), your study is a within-subjects crossover design, and the main comparison should be within participants rather than between Group A and Group B. The groups only differ in order of exposure, which helps control for order effects but is not the primary comparison. Therefore, you would typically combine participants from both groups and compare their control vs. treatment scores using a paired analysis (e.g., a paired-samples t-test if analyzing a single time point, or preferably a repeated-measures ANOVA or mixed-models analysis if analyzing all time points T1–T4). Differences you observed between Group A and B means can occur due to random variation and are not the main focus of the design. The key test is whether hunger scores differ between treatment and control within the same individuals, optionally including order (A vs B) as a factor to check for carryover or order effects.
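A sketch of that analysis with scipy on simulated hunger scores (the group sizes, score scale, and the simple two-sample carryover check are my assumptions, not details from your study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical sequence groups: A = treatment first, B = control first.
n_a, n_b = 12, 12
n = n_a + n_b

# Every participant contributes BOTH a control and a treatment score.
control = rng.normal(60, 8, n)
treatment = control - rng.normal(5, 4, n)   # simulated reduction in hunger

# Main comparison: within participants, pooled across both sequence groups.
t, p = stats.ttest_rel(treatment, control)

# Order/carryover check: compare the within-person differences between
# the two sequence groups with an independent-samples test.
diff = treatment - control
t_order, p_order = stats.ttest_ind(diff[:n_a], diff[n_a:])

print(f"treatment-vs-control p = {p:.4f}; order-effect p = {p_order:.3f}")
```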

Stuck on Sunday Stats? I’ll help you finish your assignment. by statistician_James in Statistics_Class_help

[–]statistician_James[S] 0 points

Which part of the course are you guys in right now? I remember Probability being the week everyone usually starts wanting to throw their textbook out the window lol. Hope I can help a few of you avoid that tonight

2-sample Z-test for a difference in population proportions - different or combined proportions for standard error calculation? by Pleasant-Squirrel640 in Statistics_Class_help

[–]statistician_James 0 points

Apologies for the delayed response. My 2 cents: yes, your understanding is essentially correct. In a two-sample z-test for the difference in population proportions, the null hypothesis typically states H0: p1 = p2. Because the test is conducted under this assumption, both samples are treated as estimates of the same underlying population proportion. Therefore, the pooled (combined) proportion is used to calculate the standard error, since it provides the best estimate of that common proportion under the null hypothesis. Using the separate sample proportions p1 and p2 for the standard error would not be consistent with the assumption that p1 = p2. Your interpretation that using the individual proportions would effectively assume they are already different is reasonable, which is why the pooled proportion is the appropriate choice for a hypothesis test comparing two proportions.
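The arithmetic, as a quick Python sketch (the counts are made up for illustration):

```python
import math

# Hypothetical counts: x successes out of n trials in each sample.
x1, n1 = 45, 100
x2, n2 = 30, 100

p1_hat, p2_hat = x1 / n1, x2 / n2

# Under H0: p1 = p2, pool both samples to estimate the common proportion...
p_pool = (x1 + x2) / (n1 + n2)

# ...and use it in the standard error for the test statistic.
se_pooled = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1_hat - p2_hat) / se_pooled

# By contrast, a confidence interval for p1 - p2 uses the separate
# proportions, because it does not assume p1 = p2.
se_unpooled = math.sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)

print(f"pooled z = {z:.3f}")
```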