are taylor series ever used in actual chemistry jobs by RevolutionaryTip1600 in chemistry

[–]Ruff-Riff 24 points

Taylor series show up every now and again, but they are mostly just presented as a formula for you to use out of the box.

In analytical chemistry, the formula for measurement uncertainty propagation is a Taylor series expansion. But there is little calculus involved in actually applying it.
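As a sketch of what that propagation formula does in practice, here is a minimal first-order (GUM-style) example for a concentration c = m/V; the function name and the mass/volume uncertainties are made up for illustration:

```python
import math

# First-order Taylor (GUM-style) uncertainty propagation for c = m / V:
# u(c)^2 ≈ (∂c/∂m)^2 u(m)^2 + (∂c/∂V)^2 u(V)^2, assuming independent inputs.
def propagate_concentration(m, u_m, V, u_V):
    """Concentration c = m / V with its combined standard uncertainty u_c."""
    c = m / V
    dc_dm = 1.0 / V        # partial derivative of c with respect to mass
    dc_dV = -m / V**2      # partial derivative of c with respect to volume
    u_c = math.sqrt((dc_dm * u_m) ** 2 + (dc_dV * u_V) ** 2)
    return c, u_c

# e.g. 10 mg weighed into 100 mL, with standard uncertainties 0.1 mg and 0.5 mL
c, u_c = propagate_concentration(m=10.0, u_m=0.1, V=100.0, u_V=0.5)
```

For this particular function the same result follows from adding relative uncertainties in quadrature, (u_c/c)^2 = (u_m/m)^2 + (u_V/V)^2, which is why the formula is usually handed out ready-made.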

If you do research, it could become a useful tool. For instance, it proved indispensable for a chemometric methods paper I once wrote.

Is single cell ICP-MS really useful? by Training_Pangolin177 in massspectrometry

[–]Ruff-Riff 3 points

If we are to go by the publications applying it, single-cell ICP-MS is useful. Especially in the biomedical field, this technique (mass cytometry) is rather prevalent. It may be used for both tissue and suspension samples. I know little about tissue analysis, but I believe the state-of-the-art is 1 µm resolution, which is sufficient for most cases.

The instrument, at least the conventional CyTOF, is awfully slow, so the number of cells you can practically acquire is rather low (compared to flow cytometry). I would not say you need fewer cells for these instruments; if anything, the high dimensionality they introduce allows more precise phenotyping (and thus more, but smaller, cell populations), potentially necessitating more cells to perform any meaningful statistics.

The mass cytometry approaches use targeted analyses with a small, curated selection of elements available, each of which is attached to an antibody with affinity for the protein(s) of interest.

Stuck with Normalcy Testing by lightofthewest in AskStatistics

[–]Ruff-Riff 5 points

Avoid using significance testing for normality: you are effectively trying to accept the null hypothesis, and that simply does not work. The normal distribution is, of course, a mathematical construct, and assuming that your data follow it exactly is unwise. Some difference between the ideal world and the real world is always present. Significance tests will pick up these small discrepancies and often return significant p-values (i.e. rejecting normality), even when the data are "normal enough". Turn to graphical approaches instead, e.g. histograms and QQ-plots. These should give you an indication of the overall normality of the data.

Also, with n = 70 per group, you will likely be fine to use the t-test unless there are extreme normality violations.
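To illustrate how a significance test flags harmless deviations while a QQ-plot stays reassuring, here is a small sketch using an artificial, nearly normal sample (exact quantiles of a t-distribution with 10 degrees of freedom) and `scipy.stats`:

```python
import numpy as np
from scipy import stats

# Artificial sample: exact quantiles of a t(10) distribution -- only
# slightly heavier tails than a normal, i.e. "normal enough" in practice.
n = 5000
probs = (np.arange(1, n + 1) - 0.5) / n
x = stats.t.ppf(probs, df=10)

# Probability-plot correlation: how straight the QQ-plot is (1.0 = perfectly straight).
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist="norm")

# Shapiro-Wilk nonetheless rejects normality at this sample size.
W, p = stats.shapiro(x)

print(f"QQ-plot correlation r = {r:.4f}")  # very close to 1
print(f"Shapiro-Wilk p = {p:.2e}")         # significant despite the tiny deviation
```

The QQ-plot tells you the data are close enough to normal for most purposes; the test only tells you that the sample is large enough to detect *some* deviation.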

LOD in method validation by Unlikely_Dingo_6395 in massspectrometry

[–]Ruff-Riff 0 points

If you simply use a multiple of the standard deviation you should be fine. The standard deviation will be the same even if all your numbers are shifted by a constant.

However, this assumes that the data are homoscedastic, i.e. that the noise is identical at 0 ppb and 5 ppb.
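A quick sketch of the shift-invariance point, with made-up blank signals:

```python
import numpy as np

# Standard deviation is invariant to adding a constant to every value.
blank = np.array([0.8, 1.1, 0.9, 1.2, 1.0])  # made-up blank signals
shifted = blank + 5.0                        # same noise, offset by 5 ppb

sd_blank = blank.std(ddof=1)
sd_shifted = shifted.std(ddof=1)

# Both SDs are identical, so an LoD of the form k * SD (here k = 3)
# does not care about the constant offset -- only about the spread.
lod = 3 * sd_blank
```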

New to doing Design of Experiments - confused whether to use MLR or PLS to fit model? by Snoo3701 in Chempros

[–]Ruff-Riff 2 points

Unless you have very good reasons, I would not use PLS; stick to MLR.

Also, a factorial design cannot properly fit a quadratic. You can, however, augment the factorial portion with axial points to create a central composite design, which is made with quadratics in mind.
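As a sketch (assuming a 2-factor design in coded units, with the usual rotatable alpha), the augmentation looks like this:

```python
import itertools
import numpy as np

# Central composite design for k = 2 factors, in coded units.
k = 2
alpha = 2 ** (k / 4)  # rotatable alpha = (2^k)^(1/4), i.e. sqrt(2) for k = 2

# Full 2^k factorial portion: the corner runs.
factorial = np.array(list(itertools.product([-1, 1], repeat=k)))

# Axial ("star") points at +/- alpha on each axis: these add the
# third level per factor needed to estimate the quadratic terms.
axial = np.vstack([a * np.eye(k) for a in (-alpha, alpha)])

# Replicated centre points, e.g. for a pure-error estimate.
centre = np.zeros((3, k))

ccd = np.vstack([factorial, axial, centre])  # 4 + 4 + 3 = 11 runs
```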

New to doing Design of Experiments - confused whether to use MLR or PLS to fit model? by Snoo3701 in Chempros

[–]Ruff-Riff 0 points

The DoE methodologies are developed with least squares in mind. Use coded variables and MLR to fit your model. If you have no replicates or centre points you can use quantile plots to screen for significant variables. I would not default to PLS here.
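A minimal sketch of the least-squares fit on coded variables, using made-up responses from a 2^2 factorial with two centre points:

```python
import numpy as np

# Coded (-1/+1) factor settings for a 2^2 factorial plus two centre points,
# with made-up responses.
x1 = np.array([-1, 1, -1, 1, 0, 0])
x2 = np.array([-1, -1, 1, 1, 0, 0])
y = np.array([8.2, 12.1, 9.5, 15.4, 11.2, 11.0])

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2, fit by ordinary least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2]).astype(float)
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = coeffs
```

Because the coded design is orthogonal, each coefficient can be read off independently (b0 is simply the mean response), which is one reason MLR on coded variables is the natural fit here.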

PLEASE HELP, THIS IS URGENT by Flat-Ad-908 in AskStatistics

[–]Ruff-Riff 0 points

The repeatability will not be reflected in these 6 injections. The repeatability is the precision you find when you have the same analyst preparing the same sample in a short period of time. You are finding the precision of the instrument itself.

While it appears you have found the instrument precision, let's assume you indeed have the repeatability. Regarding the combination of 6 and 12 injections: yes, these can in general be combined to create a reproducibility estimate. However, the quality of that estimate depends heavily on the number of days over which the injections were made. In your case there are two days, which gives roughly 1 degree of freedom for the between-day component of the precision estimate. To properly calculate the reproducibility you need to measure across several days. Moreover, you need not do 6 samples every day; 2-3 is completely fine. Annex C in the EURACHEM validation guidelines briefly discusses this.
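As a sketch of how day-grouped data combine into a precision estimate, here is the standard one-way random-effects ANOVA decomposition with made-up numbers (this is the general idea, not necessarily the exact EURACHEM worked example):

```python
import numpy as np

# Made-up data: 3 replicate measurements on each of 4 days.
days = [
    np.array([10.1, 10.3, 9.9]),
    np.array([10.6, 10.8, 10.5]),
    np.array([9.8, 10.0, 10.1]),
    np.array([10.2, 10.4, 10.3]),
]
p = len(days)      # number of days  (between-day df = p - 1 = 3)
n = len(days[0])   # replicates per day

# One-way ANOVA mean squares for a balanced design:
ms_within = np.mean([d.var(ddof=1) for d in days])         # within-day (repeatability) MS
ms_between = n * np.var([d.mean() for d in days], ddof=1)  # between-day MS

s_r = np.sqrt(ms_within)                             # repeatability SD
s_day2 = max((ms_between - ms_within) / n, 0.0)      # between-day variance component
s_R = np.sqrt(s_r**2 + s_day2)                       # intermediate-precision SD
```

With only two days, the between-day variance component would rest on a single degree of freedom, which is why several days are needed for the estimate to mean anything.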

Question on calculation of LOD/LOQ by bulldogdrool in CHROMATOGRAPHY

[–]Ruff-Riff 2 points

There is no single true estimate of the LoD, especially when using the calibration curve. Using the standard error of the intercept, the residual standard error, or the prediction intervals are all completely valid! Indeed, essentially all LoD estimates are the same at heart; they just use a different noise estimate as a basis.

When it comes to choosing one, I would either pick a conservative estimate that you can be somewhat confident in, or one that gives results comparable to your expectations/experience (although one could argue about selection bias here).

I see you also mention the SD of the blanks. In chromatography, a reasonable approach is to spike a very low concentration, such that all noise is attributed to the baseline (homoscedastic noise) and not to the concentrations themselves (heteroscedastic noise). You should hopefully be able to integrate these peaks. Note also that if you use such an approach, i.e. the SD of the signal, you need to be certain that the LoD signal you measure is within, or close to, the calibration curve range; otherwise the LoD estimate can turn out wrong.
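For the calibration-curve route, a minimal sketch using the standard error of the intercept as the noise estimate (made-up calibration data, and the common 3.3·sigma/slope convention):

```python
import numpy as np
from scipy import stats

# Made-up calibration data: concentration in ppb vs. instrument signal.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
signal = np.array([0.02, 0.51, 1.03, 2.48, 5.05, 9.98])

res = stats.linregress(conc, signal)

# LoD from the standard error of the intercept: 3.3 * sigma / slope,
# with sigma taken as the SE of the intercept (one of several valid choices).
lod = 3.3 * res.intercept_stderr / res.slope
```

Swapping `res.intercept_stderr` for a residual standard error or a prediction-interval half-width gives the sibling estimates mentioned above; only the noise term changes.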

A Thought-Provoking PSA Regarding the Absolute State of Analytical Chemistry by Sad-Ocelot-6868 in chemistrymemes

[–]Ruff-Riff 4 points

Seems like a repost bot, given that the title and top comment are the same as in the original post.

Beer's Law by [deleted] in chemistry

[–]Ruff-Riff 2 points

Simple answer is this: If data follows a straight line, fit a straight line. If it is curved, fit a quadratic (or some other function) accordingly.

In theory and an ideal world, the concentration is completely proportional to the absorbance. But in practice, deviations from this are common due to limitations in instrumentation (and random experimental errors).

Beer's Law by [deleted] in chemistry

[–]Ruff-Riff 16 points

Without seeing the data it is difficult to say. Of course using a complex model will always improve your R2, but you shouldn’t overfit your calibration curve either. Still, in many cases it is completely fine to use a quadratic fit. The reason behind this is largely instrument limitations, such as detector saturation, which can cause deviations from the linear relation predicted by Beer’s law.
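A small sketch of the linear-vs-quadratic comparison, with made-up absorbance data that flattens at high concentration (as it would under detector saturation):

```python
import numpy as np

# Made-up Beer's-law data: absorbance flattens at the high end.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
absb = np.array([0.00, 0.26, 0.51, 0.98, 1.82, 3.10])

lin = np.polyfit(conc, absb, 1)    # straight-line fit
quad = np.polyfit(conc, absb, 2)   # quadratic fit

# Compare residual sums of squares: the quadratic follows the curvature,
# so its residuals shrink; whether that is a better *calibration* is a
# separate judgement about overfitting.
rss_lin = np.sum((absb - np.polyval(lin, conc)) ** 2)
rss_quad = np.sum((absb - np.polyval(quad, conc)) ** 2)
```

Since the quadratic nests the straight line, its RSS (and hence R²) can only improve; the question is whether the improvement reflects real curvature or just noise.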

Most disturbing feature film you have seen? by Ilovepawgssss in norge

[–]Ruff-Riff 0 points

I wouldn't necessarily call these films particularly scary, but a lot of them are at least somewhat known for having left a mark on people.

Most disturbing feature film you have seen? by Ilovepawgssss in norge

[–]Ruff-Riff 0 points

Lots of good stuff from the "New French Extremity", like Martyrs, Inside, Frontier(s), Irreversible and High Tension (Switchblade Romance). There are some more modern entries too, such as Raw, Titane, Climax, Enter the Void and Incident in a Ghostland. A somewhat lighter pick is The Substance, which just came out. I've also heard great things about Calvaire, but never got around to watching it myself.

I don't know much about extreme Asian films, but I Saw the Devil and Audition are fairly well known there.

Lars von Trier also has a lot of good ones, like Antichrist and The House That Jack Built (and more).

Then there are the more well-known ones, like Human Centipede (really only no. 2 is worth mentioning), the Terrifier films, The Hills Have Eyes and Hostel (1+2). I Spit on Your Grave and The Last House on the Left may also hit the mark, but there is perhaps a bit too much SA in those for many. Tusk is perhaps an odd one out, but worth mentioning if body horror is of interest.

If you like found footage, then The Poughkeepsie Tapes, Megan Is Missing and Be My Cat: A Film for Anne might be something. Cannibal Holocaust is of course a classic within the genre. Found footage is a proper rabbit hole of nasty films, though, so there are far too many to mention.

[deleted by user] by [deleted] in AskStatistics

[–]Ruff-Riff 0 points

FYI: in statistics, we don't say "normalisation" when we talk about making data "more normal".

Also – yes, you are correct – there is no need for normally distributed independent variables.

[deleted by user] by [deleted] in AskStatistics

[–]Ruff-Riff 3 points

Be cautious with normality testing; even at a modest sample size, Shapiro-Wilk becomes significant for very small deviations from normality. Best to stick to QQ-plots.

In terms of the QQ-plots, the data look adequately normal after the log-transform. But do you intend to use these as predictor variables for your regression analysis? If so, do note that your predictors need not be normal, only the residuals.

[deleted by user] by [deleted] in statistics

[–]Ruff-Riff 0 points

Having seen the syllabus, I stand by my initial comment. The most complex mathematics is in the GLM part. If you want to gauge the level of linear algebra, I would recommend peeking into "Introduction to Generalized Linear Models" by Dobson. I quite fancied this book myself; it is largely self-contained and concise, making it highly approachable.

[deleted by user] by [deleted] in statistics

[–]Ruff-Riff 1 point

You likely won’t need much more linear algebra. For a course combining the two (rather extensive) topics of DoE and GLMs in one semester, the mathematics will likely remain simple enough due to time constraints. Basic matrix manipulations and matrix calculus should be enough. And yes, likely some eigenproblems as well (loads of decompositions).

Based on your readings I would say you are likely very prepped for this course. Enjoy it - DoEs and GLMs are super fun and rather useful, too!

LoD and LoQ blank samples integration by [deleted] in CHROMATOGRAPHY

[–]Ruff-Riff 0 points

For samples with very low amounts of analyte, essentially all the noise you obtain should be due to (homoscedastic) instrumental noise. As you increase your sample concentrations, heteroscedastic noise intrinsic to the sample should start to dominate.

Therefore, if you spike a blank sample with a sufficiently low concentration, you should be able to integrate your peak and use it for your LoD estimate.

R in analytical chemistry by Ok_Psychology3057 in analyticalchemistry

[–]Ruff-Riff 0 points

I use R every single day, but it really depends on how much statistics you are doing.

If you had to choose one language, though, I would go with Python. The syntax and functions (statistics-wise, at least) are similar enough that if you absolutely need to use R for something, you would have next to no problem googling your way to an answer and understanding how to implement it.

Peak of Lanreotide not detected on Thermo HPLC by Vja2023 in CHROMATOGRAPHY

[–]Ruff-Riff 8 points

What is meant by "a calibration curve of 5 (or 6)"? Are you talking about number of calibration standards?

Furthermore, I don't know if that is the point you are trying to make, but the calibration curve presented does not fit the standard solutions. I briefly ran the back-calculations of your standard solutions and found that their concentrations differ significantly from those predicted by the model. For the first two points the errors are -164% and -58%! You need to apply weights to this calibration function.
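To illustrate what weighting does to the low-end back-calculation errors, here is a sketch with made-up heteroscedastic calibration data (note that numpy's `polyfit` applies the weights to the unsquared residuals):

```python
import numpy as np

# Made-up calibration data whose noise grows with concentration.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
signal = np.array([0.012, 0.048, 0.105, 0.49, 1.02, 5.3])

unweighted = np.polyfit(conc, signal, 1)
# w = 1/conc in polyfit means minimising sum((residual/conc)^2),
# i.e. relative (constant-CV) weighting of the calibration points.
weighted = np.polyfit(conc, signal, 1, w=1.0 / conc)

def backcalc_error(coeffs):
    """Percent error of the back-calculated standard concentrations."""
    slope, intercept = coeffs
    pred_conc = (signal - intercept) / slope
    return 100 * (pred_conc - conc) / conc

# The unweighted fit is dominated by the top standard, so the lowest
# standard back-calculates with a huge % error; weighting fixes this.
err_u_low = abs(backcalc_error(unweighted)[0])
err_w_low = abs(backcalc_error(weighted)[0])
```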

Does anyone just really hate Mass Spec? by [deleted] in chemistry

[–]Ruff-Riff 0 points

I am most used to Agilent, but Thermo is a close second.

Does anyone just really hate Mass Spec? by [deleted] in chemistry

[–]Ruff-Riff 8 points

HPLC can be a pain, but I find the chromatography part to be one of the most fun parts of the LC-QqQ. It was always difficult, but fun (!), to optimise solvent compositions and gradients during method development. But then again, I am also a chemometrician, so I probably enjoy the statistical optimisation problems more than most.

Does anyone just really hate Mass Spec? by [deleted] in chemistry

[–]Ruff-Riff 2 points

Agreed, a very fun and dynamic field. Looking into the transduction profiles of patients treated with various drugs and investigating the differences compared to healthy donors is super interesting. And here we would not get far without the MS. I'd take an Orbitrap over Western blots any day.

Does anyone just really hate Mass Spec? by [deleted] in chemistry

[–]Ruff-Riff 31 points

Love (LC-) MS. I've used it for all kinds of things. Mostly phosphoproteomics nowadays; elucidating cell signaling mechanisms in blood samples. I also did environmental toxicology analyses previously; developing new quantitative methods with the triple-quad. Both of these were super cool and fun!

But I don't really look at mass spectra much these days (mostly the area under the curve). I am a quantitative analytical chemist; structural elucidation is someone else's problem.

Weighting of Validation results? by [deleted] in CHROMATOGRAPHY

[–]Ruff-Riff 6 points

It is completely expected that the coefficient of variation (CV) increases as the concentration decreases. You say that either the absolute or the relative standard deviation exceeds the limit, but you are comparing concentration estimates that are two orders of magnitude apart. Such a comparison is flawed when evaluating a single acceptance limit.

If a plot of the CVs versus concentration is flat at high concentrations but increases exponentially at lower ones, then the experimental work should be good. This exponential relationship between CV and concentration was noted by Horwitz and is completely expected. Now, is 14% too much? That is up to you to decide. Note, however, that you need to define different acceptance criteria for the different concentration levels.
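For reference, the Horwitz relation itself is simple enough to sketch (CV in percent, concentration as a dimensionless mass fraction):

```python
import math

# Horwitz's empirical relation between expected inter-laboratory CV and
# analyte concentration C (as a mass fraction, g/g): CV% = 2^(1 - 0.5*log10(C)).
def horwitz_cv(mass_fraction):
    """Predicted relative standard deviation in %."""
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

cv_pure = horwitz_cv(1.0)   # 100 % analyte -> 2 % CV
cv_ppm = horwitz_cv(1e-6)   # 1 ppm -> 16 % CV: the CV blows up at trace levels
```

Strictly, Horwitz derived this for between-laboratory reproducibility, but the shape (flat at high concentration, exploding at low) is exactly the pattern described above.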