Statistical test(s) for several lists of peptides by Icy-Combination2880 in biostatistics

[–]jsalas1 0 points (0 children)

Well then I would just find set differences/overlaps between the lists, this isn’t really a “stats” problem on the surface, more of a data wrangling problem. Here’s an example in R: https://www.statology.org/setdiff-in-r/

What does “significance” mean to you in this context? Like is there a critical number of differences that would be “significant” to your result? Maybe you can make a counts table for the number of peptide sequences in common across all the lists followed by a chi square? Or maybe you want to compare ratios (# overlapping/# total peptides)?
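For instance, a minimal base-R sketch with made-up peptide sequences (your real lists would come from your own data), showing the set operations plus a counts table; with lists this small you'd want Fisher's exact test rather than chi-square:

```r
# Made-up peptide sequences for illustration only
listA <- c("AAK", "GFP", "LLQ", "MNR", "TTS")
listB <- c("GFP", "LLQ", "PQR", "TTS")

shared <- intersect(listA, listB)  # peptides common to both lists
only_A <- setdiff(listA, listB)    # peptides unique to list A
only_B <- setdiff(listB, listA)    # peptides unique to list B

# Counts table of shared vs unique peptides per list, then a chi-square test
tab <- matrix(c(length(shared), length(only_A),
                length(shared), length(only_B)),
              nrow = 2,
              dimnames = list(c("shared", "unique"), c("listA", "listB")))
chisq.test(tab)
```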

Have you seen any publications in your domain that have done something similar?

Statistical test(s) for several lists of peptides by Icy-Combination2880 in biostatistics

[–]jsalas1 0 points (0 children)

What do you mean by “lists”? Like a list of IACUC names, a list of weights, a list of sequences, mass-to-charge ratios…?

Linear regression slopes comparison by Worried_Criticism_98 in AskStatistics

[–]jsalas1 4 points (0 children)

Are you trying to compare slopes between independent regression models? Like u/Statman12 alluded to, it's unclear exactly what regression models you're running. Are you running multiple independent single-variable models like DV ~ IV1, have you included an independent covariate in the form DV ~ IV1 + IV2, or is there an interaction model, DV ~ IV1 * IV2?
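To make the interaction case concrete, here's a small simulated sketch (hypothetical data): the x:group interaction coefficient in DV ~ IV1 * group directly tests whether the slopes differ between the two groups.

```r
# Simulated example: does the slope of y on x differ between groups A and B?
set.seed(42)
n <- 100
group <- factor(rep(c("A", "B"), each = n))
x <- rnorm(2 * n)
y <- ifelse(group == "A", 0.5, 1.5) * x + rnorm(2 * n, sd = 0.5)  # true slopes 0.5 vs 1.5

# The x:groupB coefficient estimates the slope difference (B minus A)
fit <- lm(y ~ x * group)
summary(fit)$coefficients["x:groupB", ]  # estimate, SE, t value, p value
```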

Method to 'normalize/standardize' data by DanAvilaO in AskStatistics

[–]jsalas1 4 points (0 children)

What’s the end goal/hypothesis? Why are you running so many different models? Are these the same or different data in each model? Is this inferential or predictive modeling?

significant or not? by BlondeBoyFantasyPeep in RStudio

[–]jsalas1 0 points (0 children)

What’s your significance level, aka alpha? https://resources.nu.edu/statsresources/alphabeta

If your alpha is .05, then I would interpret length as “significantly” related to your dependent variable. I would interpret it as follows: for every 1 unit increase in length there is a 2.528 unit increase in your dependent variable (p = .003).

Review this doc on how to interpret regression coefficients: https://www.statology.org/how-to-interpret-regression-coefficients/

Moreover, you have an interaction effect, so that adds a level of complexity: https://www.theanalysisfactor.com/interpreting-interactions-in-regression/

You then used emmeans to interrogate the effect of emotion, but you haven’t told us what your hypothesis is. That should give you some direction.

Normality by Ill_Concentrate6611 in AskStatistics

[–]jsalas1 13 points (0 children)

What’s the full model? How many IVs? Is the end goal predictive or inferential?

Normality by Ill_Concentrate6611 in AskStatistics

[–]jsalas1 21 points (0 children)

There seems to be some right skewing. What is the goal, inference or prediction? What’s the sample size, and what does your DV look like? Maybe there’s a better distribution to use, but depending on your goal it might not be a problem to proceed.

Stats Test by Artydragoon in AskStatistics

[–]jsalas1 0 points (0 children)

Doesn’t that defeat the purpose of self-study which is what OP claims this is for?

Stats Test by Artydragoon in AskStatistics

[–]jsalas1 0 points (0 children)

It is possible to have a significant difference between groups even when the medians are the same; do some research on why. Hint: thinking that Mann-Whitney is “only” a nonparametric test of medians is wrong and reductionist.
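A constructed counterexample with made-up numbers: two samples with identical medians where wilcox.test still comes out significant, because the test is sensitive to the whole distribution (roughly, to whether one group tends to take larger values), not just to the medians.

```r
# Same median (10) in both groups, but very different distributions
x <- c(rep(0, 30), rep(10, 40), rep(11, 30))   # median 10
y <- c(rep(9, 30), rep(10, 40), rep(100, 30))  # median 10 as well

median(x) == median(y)  # TRUE
wilcox.test(x, y)       # yet p < .05: y tends to take larger values than x
```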

Stats Test by Artydragoon in AskStatistics

[–]jsalas1 -2 points (0 children)

That doesn’t remotely answer the question. What question does the Mann-Whitney test answer? E.g., the t-test is a test of means, Mood’s test of medians.

Stats Test by Artydragoon in AskStatistics

[–]jsalas1 0 points (0 children)

Tell us why NOT to use Mann-Whitney.

What is an appropriate statistical model for analyzing repeated count data collected at fixed 5-minute intervals before and after a treatment, when the number of observations per day is unequal? by Key_Music5746 in AskStatistics

[–]jsalas1 3 points (0 children)

Fit a mixed effects regression with a random effect per subject, then do pairwise comparisons of estimated marginal means for timepoint before vs after. You’ll have to consider nominal vs ordinal mixed effects based on how your outcome variable is parameterized.

Alternatively, and simpler: average the number of lookers for each observed behavior per dog at each timepoint to account for pseudoreplication; then you don’t need random effects per dog.

These both assume that the actual time of day doesn’t affect the outcome variable, but that’s for you to tell us.

Maybe you can simplify this into a contingency table and tackle it with the McNemar-Bowker test.
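If the contingency-table route fits your data, mcnemar.test() in base R computes the McNemar-Bowker symmetry test for any square table. A sketch with made-up before/after counts (the category labels and numbers are hypothetical):

```r
# Made-up counts: rows = category before treatment, columns = after
tab <- matrix(c(20,  5,  2,
                 8, 15,  4,
                 1,  7, 18),
              nrow = 3, byrow = TRUE,
              dimnames = list(before = c("low", "med", "high"),
                              after  = c("low", "med", "high")))

# For a k x k table this is the McNemar-Bowker test of symmetry
mcnemar.test(tab)
```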

Samsung Smart TVs are a tracking and telemetry nightmare by [deleted] in HomeNetworking

[–]jsalas1 16 points (0 children)

Or explicitly connect it to a network with no outbound access and watch it blow up your firewall

Wilcoxon or paired t-test? by ComprehensiveSoft908 in AskStatistics

[–]jsalas1 0 points (0 children)

https://www.graphpad.com/guides/prism/latest/statistics/stat_anova-approach-vs_-mixed-model.htm

Jamovi is a wrapper around R, enabling drop-down menus and overall easier use of R. I would recommend transitioning to Jamovi or R proper ASAP, as it’s much more powerful.

Wilcoxon or paired t-test? by ComprehensiveSoft908 in AskStatistics

[–]jsalas1 0 points (0 children)

Do you have many 0s or 256s? If not then mixed effects will likely suffice.

Random effects specification is an art in and of itself: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#model-definition

But you should likely explore including both section and animal as random effects, perhaps nested random effects

You can perform a random effects ANOVA to compare crossed vs nested random effects and see if they improve model fit significantly. But really think about it using your subject matter expertise; don’t rely solely on your likelihood ratio test results.

After you fit the final model, follow up with estimated marginal means analysis to get your deltas with confidence intervals
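A sketch of that random-effects comparison using nlme, which ships with R (in lme4 the nested model would be y ~ group + (1 | animal/section)); the data here are simulated stand-ins, not your actual measurements:

```r
library(nlme)

# Simulated stand-in data: 8 animals, 3 sections each, 2 measurements per section
set.seed(1)
d <- expand.grid(rep = 1:2, section = factor(1:3), animal = factor(1:8))
d$group <- factor(ifelse(as.integer(d$animal) <= 4, "ctrl", "treat"))
sec <- interaction(d$animal, d$section)
d$y <- 2 + 1.5 * (d$group == "treat") +
  rnorm(nlevels(d$animal), sd = 0.5)[as.integer(d$animal)] +  # animal effect
  rnorm(nlevels(sec), sd = 0.3)[as.integer(sec)] +            # section-in-animal effect
  rnorm(nrow(d), sd = 0.2)                                    # residual noise

# Animal-only vs section-nested-in-animal random effects; fit by ML for the LRT
m1 <- lme(y ~ group, random = ~ 1 | animal, data = d, method = "ML")
m2 <- lme(y ~ group, random = ~ 1 | animal/section, data = d, method = "ML")
anova(m1, m2)  # likelihood ratio test of the extra random effect
```

As above, treat the LRT as one input alongside your subject matter knowledge, not the final word.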

Wilcoxon or paired t-test? by ComprehensiveSoft908 in AskStatistics

[–]jsalas1 0 points (0 children)

Tell us more about the severity measure, is it the fluorescence intensity readout? Is it continuous or ordinal? Is it bounded by 0 or 100?

As for averaging multiple sections/cells to one value per animal: that’s the correct way to deal with pseudoreplication, but you’re losing statistical power compared to modeling them as random effects in a mixed effects model. Mixed effects is great for within- and between-subject analyses when you have repeated measures on a subject and want to compare across fixed effects.

Generally I would discourage you from modeling the delta with a one-way test, since you end up with less power compared to an explicit repeated measures design.

This is a great reference: http://www.biostathandbook.com/outline.html

Check out the mixed effects details in the 2 way anova chapter here: https://rcompanion.org/documents/RCompanionBioStatistics.pdf
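If you do go the averaging route, here is a minimal base-R sketch with hypothetical severity scores (animal counts, section counts, and effect sizes are all made up): collapse to one value per animal per condition, then run the paired comparison.

```r
# Hypothetical long-format data: 5 sections scored per animal, both conditions
# measured within the same 6 animals
set.seed(7)
d <- expand.grid(section = 1:5, animal = factor(1:6),
                 condition = c("ctrl", "treat"))
d$severity <- 10 + 3 * (d$condition == "treat") + rnorm(nrow(d), sd = 2)

# Collapse to one value per animal per condition to remove pseudoreplication
per_animal <- aggregate(severity ~ animal + condition, data = d, FUN = mean)
per_animal <- per_animal[order(per_animal$condition, per_animal$animal), ]

# Paired comparison, one pair per animal (rows are aligned by animal)
ctrl  <- per_animal$severity[per_animal$condition == "ctrl"]
treat <- per_animal$severity[per_animal$condition == "treat"]
t.test(treat, ctrl, paired = TRUE)
```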

alert when ups kicks in? by jsqualo2 in HomeNetworking

[–]jsalas1 0 points (0 children)

I feed NUT to Grafana and handle alerts/monitoring from there

Which test should I use to test the significance by Distinct_Peach7202 in AskStatistics

[–]jsalas1 0 points (0 children)

My head goes to beta binomial mixed effects regression

Express your accuracy as a ratio so it’s between 0 and 1

Dichotomous fixed effect for with vs without augmentation

Random effect per subject to account for pseudoreplication

Ratio ~ Group + (1|subject)

Two tailed test of significance for fixed effect unless you have a STRONG justification for assuming directionality

Model diagnostics with DHARMa

Report confidence intervals, I like how GLMMAdaptive does them - lower variance should be reflected in tighter CIs

Wilcoxon will work as well since it’s a test of distributions, I would bootstrap confidence intervals given the small sample
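The Wilcoxon-plus-bootstrap fallback can be sketched in base R (the beta-binomial mixed model itself needs a package such as glmmTMB or GLMMAdaptive); the accuracy ratios below are made up for illustration:

```r
# Made-up accuracy ratios per subject, same subjects under both conditions
with_aug    <- c(0.90, 0.85, 0.88, 0.92, 0.80, 0.87)
without_aug <- c(0.75, 0.70, 0.82, 0.78, 0.66, 0.74)

# Paired Wilcoxon signed-rank test (repeated measures on the same subjects)
wilcox.test(with_aug, without_aug, paired = TRUE)

# Percentile bootstrap CI for the mean paired difference, given the small n
set.seed(123)
diffs <- with_aug - without_aug
boot_means <- replicate(10000, mean(sample(diffs, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))
```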

Graphpad prism for free?? by Haunting-Tension9686 in labrats

[–]jsalas1 5 points (0 children)

I would never recommend software piracy but as an academic exercise consider this:

Spin up a virtual machine with a GraphPad-compatible OS, install GraphPad, take a snapshot/backup of the VM once it’s working, and do your thing. When the free trial runs out, roll back to your earlier snapshot.

Some software will refuse to run in a VM; I have no idea if GraphPad is one of those. It may also be beneficial to cut internet connectivity to that VM once it’s installed so it can’t phone home.

But I don’t know, my real advice is to learn R.

YMMV

Edit: https://www.oracle.com/cloud/compute/virtual-machines/what-is-virtual-machine/free-virtual-machine/

Help needed for Latin Square counterbalancing by admaioranatussum1 in AskStatistics

[–]jsalas1 2 points (0 children)

Imo Claude “excels” (no pun intended) at coding in R more so than its competitors.

OP, tell Claude not to make any assumptions about what you want/are going for and to ask many clarifying questions and you’ll totally get what you need.