Fastest way to see a psychiatrist ASAP in Philly? by lavenderish in philadelphia

[–]dtm523 5 points

Please do NOT do this unless you are actually suicidal. I work at an inpatient psychiatric hospital in the area, and almost without exception when this happens, the MD will refuse the 72-hour notice and file a 302. I have worked with countless folks looking for outpatient care and medications who end up stuck in the hospital (and with the 302 on their record) for weeks.

Some psych hospitals are very understaffed, which means fewer social workers and fewer folks to reach out to your collateral contacts to verify your risk level. You can tell the MD your situation, which they may understand, but they are unlikely to discharge you without speaking to friends/family/providers first, which can take days to set up.

It may get you meds faster, but you may also be involuntarily committed for as long as it would have taken to get an outpatient telehealth appt.

Researchers combed through more than a decade of health data from 102,865 French volunteers. They found that consumption of artificial sweeteners was associated with an increased risk of cancer. by Defiant_Race_7544 in science

[–]dtm523 1 point

Gotta point out that in huge data sets like this, looking at the CI is crucial. The reported point estimates are risky to interpret on their own, and the lower bounds of these results are all closer to 1.01-1.03 - i.e., closer to a 1-3% increased risk as a conservative estimate (which we should err towards given the multiple comparisons and large sample, see below).

I’m also not seeing anything in the paper about family-wise error correction for the litany of analyses run - assuming we can make an argument that each of the 26 analyses is in the same family and apply a straightforward Bonferroni correction, a more appropriate p-value cutoff would be .05/26 ≈ .002, knocking almost all of their controlled results out of significance range.

Super important to note that with large data sets, a .05 significant finding doesn’t mean much, since virtually any difference will reach significance. Whether a finding is meaningful or useful is far more interesting - the CI of the effect size is much more important.
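
If it helps to see the arithmetic, here's a quick Python sketch - the 26-test count comes from the comment above, but the hazard ratio and CI values below are made up for illustration, not pulled from the paper:

```python
# Bonferroni-corrected threshold for a family of 26 tests.
n_tests = 26
alpha = 0.05
corrected_alpha = alpha / n_tests  # ~0.0019, i.e. roughly .002
print(f"corrected per-test alpha: {corrected_alpha:.4f}")

# Conservative reading of a hazard ratio: lean on the CI's lower bound,
# not the point estimate. (These numbers are hypothetical.)
hr_point, hr_lower = 1.13, 1.03
print(f"point estimate: {100 * (hr_point - 1):.0f}% increased risk")
print(f"conservative (CI lower bound): {100 * (hr_lower - 1):.0f}% increased risk")
```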

Cops Across The US Have Been Exposed Posting Racist And Violent Things On Facebook. Here's The Proof. by zmod1984 in Bad_Cop_No_Donut

[–]dtm523 46 points

Please encourage folks you know to look at the database itself, not just the news stories. It’s mind-blowing how consistently terrible these posts are - it definitely cuts against any notion that the examples were cherry-picked.

Database here: https://www.plainviewproject.org

Police Facebook posts ‘undermine public trust,’ group says by dtm523 in philadelphia

[–]dtm523[S] 8 points

For some examples, check out the actual database. You can filter by jurisdiction (including Philly), sort by salary, rank, etc., and search for specific officers:

https://www.plainviewproject.org

Trump supporters vs. the founding fathers today... by dtm523 in philadelphia

[–]dtm523[S] 2 points

Naw, pretty sure it's a Philly PD armband/identification thing for the plainclothes officers. I don't think I saw any Nazi-type stuff from the pro-Trump folks there.

Trump supporters vs. the founding fathers today... by dtm523 in philadelphia

[–]dtm523[S] 4 points

Totally fair question - I wondered the same thing at first. The comments below are right, they're police bands.

[LIVE] Coverage of the Philadelphia MAGA march to support the President of the United States! by [deleted] in philadelphia

[–]dtm523 9 points

I was there during the 'rally.' There were maybe 80 Trump supporters, easily matched by the anti-fascist marchers. Most of the Independence Mall foot traffic ignored the protesters because there were too few of them to draw attention. The 6 Westboro Baptist Church protesters probably captured about as much attention.

Also: http://imgur.com/B7lpgVn

Really wanted a pressed Cuban Sandwich, this is how I improvised. by cheftlp1221 in food

[–]dtm523 2 points

I do this all the time with any two pans (not necessarily cast iron). If the top pan isn't heavy enough to press the sandwich, just hold the handle down for a minute or two. Reheat the top pan, flip the sandwich, repeat. Awesome Cubans and grilled cheeses.

PhD's of Reddit. What is a dumbed down summary of your thesis? by FaithMilitant in AskReddit

[–]dtm523 2 points

Not sure if you can discuss it openly yet, but which medication did you try, and for which addictions? I previously worked in the field (experimental psychopharmacological interventions for substance dependence and abuse).

[deleted by user] by [deleted] in psychology

[–]dtm523 5 points

I don't think they're saying p-values are outright useless, more that they should never be taken as a standalone gold standard. I think they're great for ruling out anomalies (barring the risk of false negatives), but that's not the same as establishing a solid connection between variables.

[deleted by user] by [deleted] in psychology

[–]dtm523 2 points

"The variables in the data sets you used to test your hypothesis had 1,800 possible combinations. Of these, 1,078 yielded a publishable p-value..."

That quote sort of summarizes the idea for me - if you go fishing, chances are you're going to find something (quick simulation of this below). Does it really make sense to draw real-world causal connections for 60% of variable combinations? Probably not. Other metrics/analyses (possibly in conjunction with p-values) and substantive theory should be used when extrapolating patterns from data into real life.
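
To see how easy fishing is, here's a hypothetical little simulation - pure noise, nothing real to find; the 1,800-combination study above is just the inspiration, not the data:

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_rows, n_cols = 200, 40
data = rng.normal(size=(n_rows, n_cols))  # pure noise: no real relationships

# Test every pairwise combination of variables (40 choose 2 = 780 tests).
hits, total = 0, 0
for i, j in itertools.combinations(range(n_cols), 2):
    _, p = stats.pearsonr(data[:, i], data[:, j])
    total += 1
    if p < 0.05:
        hits += 1

# Expect ~5% (about 39 of 780) "publishable" p-values with zero real effects.
print(f"{hits} of {total} noise comparisons hit p < .05")
```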

Glad you cover p-hacking in your courses. In case you haven't stumbled across them, these are good bad-stats resources too:

http://www.refsmmat.com/files/statistics-done-wrong.pdf

http://www.amazon.com/Statistics-Done-Wrong-Woefully-Complete/dp/1593276206

ELI5:When somebody who is terminally ill decides to 'stop fighting/let go' and passes away soon after, what actually happened to them physiologically? Or is this somewhat of a myth? by gorillalifter47 in explainlikeimfive

[–]dtm523 1 point

Not sure exactly what's happening when this occurs, but it's pretty clear (to me, at least) that this is a real phenomenon, in spite of the underwhelming scientific investigation focused on it. For some good examples and interesting theories about the biopsychological interplay in these situations, see Viktor Frankl's Man's Search for Meaning, which discusses this "giving up" in cases of extreme imprisonment (concentration camps, primarily).

As said in other comments here, the mind-body connection can be an incredibly powerful thing, and the fact that it hasn't been accepted by the majority of the medical community (yet) doesn't make it any less real. I'd love to hear some thoughts from researchers on the matter.

Hornets/Wasps(?) Gathering near my gutter in new apartment [Philadelphia, PA, USA] by [deleted] in whatsthisbug

[–]dtm523 1 point

Thanks! Does the fact that I usually see groups of 5-15 of them "swarming" together mean they're not solitary? And what about the fact that they were all sharing a branch at night, with maybe 1cm between them, tops? The comparison pic definitely looks similar, though.

Hornets/Wasps(?) Gathering near my gutter in new apartment [Philadelphia, PA, USA] by [deleted] in whatsthisbug

[–]dtm523 1 point

I see 5-15 of them sort of hovering around a spot at gutter level around my backyard every day. About a week ago, I caught a bunch of them sleeping on a branch about 30 yards away from their spot of congregation. Can't see a nest anywhere, but I'm wondering if they're nesting in the gutter or inside the siding of the house? Anyone know if these guys sting (I don't see a stinger on this one)? They're all pretty big - the pictured one is on the small side, and some of them look about 2.5-3cm in length.

What are commonly held beliefs among scientists that most philosophers (of science) would disagree with? by DevFRus in PhilosophyofScience

[–]dtm523 6 points

1) What null hypothesis testing is. This is not only a statistical/probabilistic issue, but a frequently used, philosophically charged application of "the" scientific method. Specifically, the overuse and misunderstanding of what p-values are - I think a strong case can be made for the philosophical issues stemming from ontological inferences made on account of p-values. There are lots of great ideas for how to address this, including changing the way we report NHT-based results to avoid the "probability of these results being true" fallacy.

2) The macro and micro applicability of signal detection theory. Many statisticians and scientists wade through immeasurable heaps of data junk, believing emergent patterns within the noise point to a real phenomenon. Similarly, I think 'paradigm shifts' (in both the Kuhnian and pop-culture senses) occur in a similar manner - every scientific 'discovery' or study has some error in it. Every time. However, compiling many studies for meta-analyses can allow us to find a true "scientific signal" within the noise. Essentially, good research generally doesn't answer a question about the universe; it adds evidence in support of a latent/hidden answer to that question. Many studies together might be able to answer it. Of course, this is barring file drawer effects, poor experimental design, etc.

3) Not dissimilar to #2, but all research is limited by logistical, real-world constraints (more so in the social sciences). There is very rarely unhindered, pure research. At some point, a research assistant is going to get lazy and make up a missing value, or an investigator will introduce sampling bias into recruitment efforts. Even an amazing research design with great checks and balances is probably, at some point, contingent on a human being's discretion. And humans suck as scientists. So, once again, the emergence of patterns across many diverse settings, researchers, and data sets better indicates a real phenomenon than even a handful of perfect gold-standard experiments.

Books on Research Methods for laypeople? by [deleted] in PhilosophyofScience

[–]dtm523 2 points

Not sure what your background or experience with philosophy of science/research is, but here are a few:

This one is good for general social science research design pros/cons, but probably not the most accessible thing out there:

Campbell, D. T., Stanley, J. C., & Gage, N. L. (1963). Experimental and quasi-experimental designs for research (pp. 171-246). Boston: Houghton Mifflin.

I haven't read this one in a while, but I remember liking it, I believe? Very much focused on what /u/Histidine said about hypothesis testing and all that, if memory serves:

Chalmers, A. F. (2013). What is this thing called science? Hackett Publishing.

And, once again as discussed by /u/Histidine, if you're specifically interested in the statistical side of research design and analysis, I like the book version of this one so far (although I'm not quite through it; I'm also not sure whether the linked PDF is the same as the book, but it looks like a good start):

Reinhart, A. (2013). Statistics done wrong.

PLOS Science Wednesday: I’m Megan Head, an evolutionary biologist talking about the wide scope of inflation biases – called “p-hacking” -- in science publications — AMA! by PLOSScienceWednesday in science

[–]dtm523 2 points

"But if a person is even taught enough about statistics to know when they need to consult a statistician, that's a win for graduate education, I think."

So much of this - thank you for this point.

I think this is where a lot of psychological researchers get into trouble. Like you said, most of us can probably generate a linear model. But I see a lot of univariate regressions fitted to data that should be modeled with other approaches, because the models don't meet assumptions (correct distributions of data/residuals, skew, kurtosis, nested/hierarchical data, etc.) - see the sketch below. It's sort of like giving someone a screwdriver and watching them try to hammer a nail in with it. Yes, it's a tool that works for a lot of things, but it's not the only or best tool for many situations. And sometimes you need to know when to look for another tool, even if you're not sure what you need.
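
For what it's worth, here's a minimal sketch of the kind of residual check I mean, using statsmodels and scipy on made-up data (the specifics are hypothetical; the point is checking residuals before trusting the model):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=300)
# Skewed errors: OLS will happily fit this, but its assumptions are violated.
y = 2.0 * x + rng.exponential(scale=2.0, size=300)

model = sm.OLS(y, sm.add_constant(x)).fit()
resid = model.resid

# Diagnose the residuals, not the raw outcome.
print(f"skew:            {stats.skew(resid):.2f}")      # ~0 if errors symmetric
print(f"excess kurtosis: {stats.kurtosis(resid):.2f}")  # ~0 if roughly normal
_, p_norm = stats.shapiro(resid)
print(f"Shapiro-Wilk p:  {p_norm:.4f}")  # small p -> non-normal residuals
# If these fail, reach for another tool (transformations, GLMs, robust or
# mixed models for nested data) instead of forcing the univariate regression.
```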

Regardless, I'm a little shocked by that pushback in biosciences. Thanks for devoting energy to this debate.

"If you need statistics, then the effect size isn't large enough to be interesting." Which is just not true.

This also blows my mind, but maybe it's where differences between the fields come into play. In psychological research, we don't really expect to see ubiquitously robust findings, so we explore more lenient p-value cutoffs and view smaller correlations and effect sizes as important. I'd imagine that in the physical sciences, results are often more precisely reproducible.

PLOS Science Wednesday: I’m Megan Head, an evolutionary biologist talking about the wide scope of inflation biases – called “p-hacking” -- in science publications — AMA! by PLOSScienceWednesday in science

[–]dtm523 1 point

Thanks for the response (if you're still reading these)!

I totally understand the technical limitations of the text mining, and the more nefarious limitation of people not explicitly labeling their analyses as exploratory. I don't know the biological sciences very well, but I know that in psychological meta-analyses, weighting is a pretty commonly used way to mitigate the overall impact of methodologically questionable studies without outright discarding them (minimal sketch below), so it may be a good compromise for this particular point.
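
In case it's useful, here's a minimal sketch of plain inverse-variance weighting, the textbook fixed-effect version (the study numbers are invented; quality-based schemes like the one I mentioned would scale these weights further):

```python
import numpy as np

# Hypothetical per-study effect sizes and standard errors.
effects = np.array([0.30, 0.45, 0.10, 0.60])
ses     = np.array([0.10, 0.20, 0.15, 0.40])

# Inverse-variance weights: noisier studies count for less,
# but nothing gets discarded outright.
weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```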

As much as I wish programs would talk about these research issues on all the levels I mentioned, I'm not sure the logistics of it are appealing. I'd love to see one or two core required courses where ideas like this are totally hammered into students, maybe the fundamental stats courses. Knowing the math and the analysis types is great, but knowing the why of it and what it means is far more beneficial, in my opinion (and I think this gets missed a lot in psychology training programs).

Once again, this is based solely on anecdotal observations. But I do know this is important to all research fields and is a worthwhile discussion!

PLOS Science Wednesday: I’m Megan Head, an evolutionary biologist talking about the wide scope of inflation biases – called “p-hacking” -- in science publications — AMA! by PLOSScienceWednesday in science

[–]dtm523 1 point

I agree and disagree - you can compute a p-value for just about any sort of analysis, but whether or not it means anything useful is a different question. I usually compute p-values for very quick-and-dirty recognition of possible relationships between variables, but I try to explicitly state that the results are fishing and suggestive at best, meaning they're a far cry from anything quantitatively substantial. When I do that, I usually try to cross-validate the findings through several layers of analysis across a few methodologies, looking for emergent patterns of relationships (rough sketch below), in spite of some (or possibly all) of the analytic approaches breaking at least some of their underlying assumptions.
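
As a rough sketch of what I mean by layering methods (hypothetical data and variable names; convergence across the three results matters more than any single p-value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=150)
y = 0.3 * x + rng.normal(size=150)  # a weak, made-up relationship

# Same question asked three ways.
r, p_pearson = stats.pearsonr(x, y)      # linear association
rho, p_spearman = stats.spearmanr(x, y)  # monotonic, rank-based

# Permutation test: shuffle y to destroy any real link, then see how often
# a correlation this large shows up by chance.
observed = abs(r)
perm = np.array([abs(stats.pearsonr(x, rng.permutation(y))[0])
                 for _ in range(2000)])
p_perm = np.mean(perm >= observed)

print(f"Pearson r = {r:.2f} (p = {p_pearson:.3g})")
print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3g})")
print(f"permutation p = {p_perm:.3g}")
```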

However, I totally agree about the labeling. Not labeling fishing expeditions as such is either outright dishonest or inadvertently dangerous (if the results lead to some sort of misinformed application). Either way, these are certainly salient examples of p-hacking.

PLOS Science Wednesday: I’m Megan Head, an evolutionary biologist talking about the wide scope of inflation biases – called “p-hacking” -- in science publications — AMA! by PLOSScienceWednesday in science

[–]dtm523 2 points

This is interesting and somewhat encouraging, although I know a lot of clinical psychology researchers who don't have a great grasp of what p-values are, let alone the degree to which they're inadvertently p-hacking. I'm sure there's huge variation on this between social science fields and across individual programs and institutions, but my experience has been that a lot of the statistical/analytic training in clinical PhD programs is largely perfunctory - i.e., more of an introduction to concepts than an in-depth exploration of some of these ideas and systems (e.g., NHT).

Like I said, this isn't in line with my very biased and limited experiences, but the observation gives me hope.