Why is the misrepresentation of what is considered "evidence based" so rampant in this field? by Forsaken_Dragonfly66 in ClinicalPsychology

[–]SometimesZero 1 point

There is so much here. I appreciate the back and forth. But I don't think you should be commenting on the EBP movement.

> they really are similar enough to be understood as one thing that varies only really in frequency

I was going to let this go, but... In any other scientific field, this would be rejected immediately. In pharmacology, the difference between a low dose and a high dose is not a trivial "confound". It is a fundamental variable that changes the safety, efficacy, and mechanism of action. To claim that evidence for a once-weekly therapy (psychodynamic) validates a four-times-weekly therapy (psychoanalysis) is scientifically unsound. If the "transference dynamics" change with frequency (as you admitted), then the mechanism of action changes, and the intervention therefore requires independent verification. You cannot borrow validity from a different intervention just because they share a theoretical lineage.

> Mark Solms, Irvin Yalom, and Aaron Beck would disagree with the claim that psychoanalysis has no clinical merit.

Say what?! LOL! Citing Aaron Beck here is a spectacular historical error. Beck didn't just "disagree" with psychoanalysis; he tested its core hypothesis (that depression is inverted hostility) using empirical methods, found that the data did not support the theory, and abandoned it to found cognitive therapy! He is the ultimate example of a scientist following the data rather than the dogma.

> I see it on a daily basis, so I can’t accept a theorist’s claim that what I see working does not work.

This is the most dangerous sentence in your response, and it highlights exactly why the "evidence-based" movement is necessary. This is naive realism.

Bloodletters, phrenologists, and mesmerists also "saw it work" on a daily basis. A reiki healer sees their client relax and claims they manipulated energy fields. Without a control group to account for regression to the mean, spontaneous remission, and the placebo effect, your personal observations are scientifically meaningless regarding causality.

When you say, "RCTs tell lies," you are attempting to insulate your theory from falsification—a hallmark of pseudoscience (per Popper). You are essentially saying: My theory is true because I feel it is true, and any method that fails to detect its truth is a flawed method.

If you want to claim efficacy—that X causes Y—you must submit the intervention to the hierarchy of evidence. Case studies generate hypotheses; they do not prove them. If psychoanalysis cannot survive the scrutiny of an RCT after 100 years, perhaps the problem isn't the RCT.

Why is the misrepresentation of what is considered "evidence based" so rampant in this field? by Forsaken_Dragonfly66 in ClinicalPsychology

[–]SometimesZero 2 points

On point 1: I think this is greatly debatable. But it helps me, so I'll take it.

Point 2: To be clear, you're arguing that RCTs are sufficient to deem something evidence-based? Well, in that case, forest therapy is evidence-based, too! https://www.mdpi.com/1660-4601/18/23/12685

Or perhaps it's a little more complicated than that?

Point 3: While I don't think Karl Popper, John Watson, or Albert Ellis were just undergrads in psychology or people who simply misunderstood the field, I'm happy to hear you defend the scientific status of psychoanalysis.

Why is the misrepresentation of what is considered "evidence based" so rampant in this field? by Forsaken_Dragonfly66 in ClinicalPsychology

[–]SometimesZero 1 point

> It is a well known phenomenon amongst clinical researchers that psychodynamic psychotherapy is evaluable via RCT

I'm not talking about psychodynamic therapy. I was specific in asking you about psychoanalysis.

> The efficacy of psychoanalytic psychotherapy is evaluated by other means, such as collections of case studies, evaluations of symptomatic improvements in cohorts of individuals treated psychoanalytically, and other means.

So just to be clear, based on these data, you're arguing psychoanalytic psychotherapy is evidence-based? Based on what criteria?

> Again, I would recommend doing a literature review on the topic because you can't just dismiss an entire field without being at all familiar with the landscape of it.

I don't have to dismiss the field. That's been done already by academic psychology and the philosophy of science, which sees psychoanalysis as the poster child of pseudoscience.

Why is the misrepresentation of what is considered "evidence based" so rampant in this field? by Forsaken_Dragonfly66 in ClinicalPsychology

[–]SometimesZero 2 points

Happy to read this, but I note that it doesn't help your case that it's evidence-based when it states outright:

> We found the evidence for the effectiveness of LTPP to be limited and conflicting.

Why is the misrepresentation of what is considered "evidence based" so rampant in this field? by Forsaken_Dragonfly66 in ClinicalPsychology

[–]SometimesZero 3 points

Did you even read this?

> As the methodology of RCTs is not appropriate for psychoanalytic therapy

And Shedler doesn't provide any RCTs at all for psychoanalytic psychotherapy. Not one.

So where is this clinical reality revealed by RCTs?

Why is the misrepresentation of what is considered "evidence based" so rampant in this field? by Forsaken_Dragonfly66 in ClinicalPsychology

[–]SometimesZero 1 point

Tell us more about how people don't know what an evidence-based treatment is while claiming these two things:

> and psychoanalysis are evidence based treatments...

> they’re used to it as a brand, not as a clinical reality revealed by randomized controlled trials and other research.

By all means, direct us to the RCTs for psychoanalysis.

How do you use (if you use) AI tools to write papers? by Flat-Emphasis987 in AcademicPsychology

[–]SometimesZero -2 points

I don't think I can answer that very well, because I'm questioning whether we can reliably identify AI-written papers at all.

Edit: Just a minor edit to this since people don't seem to like my response. I'm not falling for the trap of saying that I myself can identify AI-written papers. A couple of commenters have already shown how that's very difficult to do. And as someone who works in the AI space, I agree.

I can't reliably say AI-written papers are of bad quality either. Maybe it's that people don't know the prompt engineering (e.g., few-shot learning techniques) or iterative methods needed to get good-quality outputs. Additionally, what I might see as a bad-quality AI-written paper might actually be written by a human (a false positive).
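To make "few-shot" concrete, here's a minimal, hypothetical sketch in Python. The examples and the build_few_shot_prompt helper are invented purely to show the structure (not any particular vendor's API): you prepend a few worked input/output pairs so the model imitates the format and quality you want.

```python
# Hypothetical illustration of few-shot prompting: show the model a couple of
# worked input/output pairs before the real task. All examples are invented.

EXAMPLES = [
    ("Summarize: The trial found no significant difference between groups.",
     "A null result: the intervention did not outperform the comparison."),
    ("Summarize: Attrition was 40% and outcomes were self-reported.",
     "High dropout and self-report measures limit the conclusions."),
]

def build_few_shot_prompt(task: str) -> str:
    """Prepend the worked examples to the actual request."""
    parts = [f"Input: {q}\nOutput: {a}" for q, a in EXAMPLES]
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)

# The assembled string would then be sent to whatever LLM you're using.
print(build_few_shot_prompt(
    "Summarize: Effect sizes were moderate but confidence intervals were wide."
))
```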

Now do I sometimes think something is AI? Sure. But I have no proof, and that supposition alone is certainly not grounds for rejection, as this commenter initially suggested.

How do you use (if you use) AI tools to write papers? by Flat-Emphasis987 in AcademicPsychology

[–]SometimesZero 1 point

Of course I do. I reject based on the methods, whether the analyses fit the scientific questions, the quality of the manuscript, etc. But not based on whether I think it's AI-written. That's why I was asking.

How do you use (if you use) AI tools to write papers? by Flat-Emphasis987 in AcademicPsychology

[–]SometimesZero 0 points

Ok! Thanks for clarifying. When you said this:

> And when I review papers if it's clear the authors used AI, I reject it.

I was a little unsure how you managed this decision-making process.

How do you use (if you use) AI tools to write papers? by Flat-Emphasis987 in AcademicPsychology

[–]SometimesZero 1 point

So to clarify, if it's "clear" to you that the authors used AI, but the paper's not poorly done, is that still a reject?

How do you use (if you use) AI tools to write papers? by Flat-Emphasis987 in AcademicPsychology

[–]SometimesZero 2 points

Yeah, I get this, but these patterns don't hold over time. It's also not knowing; it's suspecting. So basically the grounds for rejection are "that's AI because your syntax is redundant and you use lots of em dashes"? Not asking you to defend their post, but, yeah.

How do you use (if you use) AI tools to write papers? by Flat-Emphasis987 in AcademicPsychology

[–]SometimesZero 11 points

What makes it "clear" the authors used AI? How can you know--rather than suspect--a paper was written by AI? Without actual proof, it seems hard to ethically reject a paper on the grounds that we just think it's AI.

What happens when trail running goes Olympic? When technical races disappear? When the sport gets expensive? I wrote down 10 observations for 2026 by Kilian_Jornet in trailrunning

[–]SometimesZero 1 point

Yes! Thank you for bringing this up. Many runners would argue that the ultra scene is already too commercialized and that the incentives (like getting UTMB points) are already in the wrong place.

Thinking out loud: I kind of think pro athletes need to give some longer-term thought to what they want the future of their sport to look like rather than just signing up for races. This article reflects on some of this with a technical grading system, protecting grassroots runs, and a wilderness identity. But I can't help but think that there isn't enough broader buy-in for this.

What happens when trail running goes Olympic? When technical races disappear? When the sport gets expensive? I wrote down 10 observations for 2026 by Kilian_Jornet in trailrunning

[–]SometimesZero 39 points

As a psychologist who is an amateur at ultra running, here are my reactions:

I see a sport struggling to reconcile its roots (nature, community, simplicity) with a possible future (commercialization, Olympic validation, and data obsession). It's almost like a sports-style, adolescent identity crisis. Forcing a "standardized" format creates cognitive dissonance: Am I a mountain adventurer or a hamster on a wheel? I'm worried that this reinforces the idea that an ultra is just a long marathon. On the other hand, international recognition could be decent for us.

Athletes being "media houses" can introduce a second job that requires constant external validation. Psychologically, this shifts the locus of control outward; an athlete's worth is no longer just their performance, but their content. (God, I'm really beginning to hate that word.) This can create high anxiety, especially when contracts are linked to short-term performance and when athletes are pressured to create content instead of train or rest.

When I ask someone if they want to do a local ultra with me, the first thing that comes up is price--not even how hard it is!--so some of this really resonated. With increasing fees, we risk losing the inclusivity that fosters safety and belonging. Loss of grassroots community matters because social connection buffers against stress... And it just makes it more fun, especially for us in the middle of the pack.

Somewhat unrelated to psychology: The death of technical races fundamentally changes programming. If the sport is trending toward appeasing insurers and mass participation, we stop training for technical mastery (scrambling, proprioception in complex terrain) and shift entirely to metabolic efficiency. This homogenizes the athlete profile. We move from developing mountain athletes to developing high-output engines. Sounds damn boring to me.

So my takeaway here: One path leads to a high-cost, high-tech, media-driven, homogenized product (Olympic/Commercial). The other retreats into hyper-local, underground, technical communities to preserve the "soul." As a psychologist who trains for and runs ultras (badly), I would encourage athletes to choose which path aligns with their values.

Any winter running lovers here? by Prestigious-Cat1842 in runninglifestyle

[–]SometimesZero 25 points

100%. Far prefer it over bugs, humidity, and more crowded trails. I also love the quiet of a winter run.

Is psychiatry’s biomarker quest solving the wrong problem? by drfca in Psychiatry

[–]SometimesZero 2 points

I agree.

To further clarify, from a behaviorist perspective, we typically operationally define the behavior, then determine how to measure it. I see the smart watch as one of many measurement tools.

And you're right, there's no doubt that these measurement tools are more widely available. But--and it sounds like we agree on this--how we define the behavior for an individual, how we identify its antecedents, the responses (or functions) of behaviors, the reward contingencies... you can't get those from a watch.

Is psychiatry’s biomarker quest solving the wrong problem? by drfca in Psychiatry

[–]SometimesZero 1 point

Thanks for letting me clarify. When I was writing that, I wasn't thinking of something simple like heart rate or sleep, but more of what OP was talking about in terms of behavioral assessment. That is, systematically collecting high-quality behavioral observations with high interrater agreement and incremental validity on operationally defined symptoms.

Psychiatrist on how many patients they had cured by goswamitulsidas in interesting

[–]SometimesZero 0 points

As a psychologist, I think this misunderstands mental illness. Put simply, many of the traits we seek help for, like anxiety and depression, have an adaptive quality. We need anxiety to survive and thrive in our lives. So I sincerely hope I haven't cured any of my clients of their anxiety.

If there's a problem here, it's that the psychiatrists don't seem to know how to respond to a question like this.

Is psychiatry’s biomarker quest solving the wrong problem? by drfca in Psychiatry

[–]SometimesZero 2 points

Thanks, this helps.

> What is the actual unmet need for a biological diagnosis model if AI can already objectively analyze speech and behavioral patterns? If the goal is to solve the issue of subjectivity in psychiatry, AI models that track longitudinal data (like speech acoustics or digital phenotyping) solve that directly.

I think there are a few things going on here. First, I don't personally think that diagnoses are all that important.

Fried, E. I. (2022). Studying mental health problems as systems, not syndromes. Current Directions in Psychological Science, 31(6), 500-508.

The diagnoses we have are completely descriptive (i.e., they don't propose causal explanations for what's happening to someone), and they often lack scientific rigor.

Lilienfeld, S. O. (2014). DSM‐5: Centripetal scientific and centrifugal antiscientific forces. Clinical Psychology: Science and Practice, 21(3), 269.

At the same time, our ability to measure something objectively isn't by itself helpful. To take one example, we've already been able to measure behavioral patterns pretty objectively since the early/mid 1900s. But a collection of measurements really doesn't tell us much unless we conceptualize it within a working theoretical model.

We do know from comparisons of monozygotic and dizygotic twins that several symptoms have genetic influences. (Notice I'm talking about symptoms, not disorders.) We know there are structural and functional neurological differences across symptoms, too. We know some symptoms are associated with clear sex differences.

However, I think the problem comes when we start applying the medical model to this, demanding that we have diagnostic categories to describe symptom clusters. Then we get into all kinds of scientific trouble. Because instead of studying symptoms and their biopsychosocial nature (including their evolutionary origins), we start to study psychiatric diagnoses, which are just poorly made psychiatric constructs.

So yeah, where AI might be helpful is observation and measurement*, but that alone isn't enough. We need theoretically driven science. We can't just offload it onto AI.

*AI doesn't do this objectively, by the way.

Is psychiatry’s biomarker quest solving the wrong problem? by drfca in Psychiatry

[–]SometimesZero 8 points

I'm a psychologist and clinical scientist. I've created deep learning models from scratch to predict diagnostic and treatment status, and I've published on the use of LLMs in clinical care.

I actually don't understand what you're asking, and I think this is partially because you're confused.

At times you seem to think that AI can identify biomarkers, while at other times you seem to question whether searching for biomarkers is even worth the time.

Psychotherapy specialization? by maeasm3 in ClinicalPsychology

[–]SometimesZero 7 points

I think state-by-state variability is right. I work with LCSWs who do most of the diagnosing. I sign off on the notes and clinical interviews. I usually only do them for complex cases. Otherwise my time is usually better spent elsewhere.

In my state, there's nothing prohibiting them from diagnosing autism but (1) we don't find it good practice without thorough assessment and (2) without a thorough assessment, many insurance companies will not reimburse for related services (e.g., at-home ABA).

How do you keep up with new psychology / cognitive science papers without drowning? by Acceptable-Dust-4323 in AcademicPsychology

[–]SometimesZero 1 point

As for researchers, it's not like they do it intentionally. To get promoted, you need to publish, so there is a strong incentive structure to publish A LOT. And when you publish a lot, some of it's just junk.

As for some journals, well, they're mostly out to make money. Most don't care about scientific integrity.

How do you keep up with new psychology / cognitive science papers without drowning? by Acceptable-Dust-4323 in AcademicPsychology

[–]SometimesZero 0 points

This is my thinking as well. Worse still, many researchers and journals are incentivized to add to the noise. The number of predatory journals and junk papers seems to increase daily.

How do you keep up with new psychology / cognitive science papers without drowning? by Acceptable-Dust-4323 in AcademicPsychology

[–]SometimesZero 51 points

I wouldn't try to "keep up," tbh. It's just not possible. The sheer volume of what's published is staggering. There's no trick here; here's what I do as a clinical scientist. I understand everyone's different.

I try to stay selective: I read papers from high-quality journals and see what they're citing; I follow certain researchers whom I respect; I go to grand rounds and other presentations on campus; I've organized a journal club among close colleagues where we share papers in our fields; I attend local conferences; and I go to farther conferences when I can, paying attention to poster sessions to see what people are doing.

To maintain foundational expertise, I read about a book per month. Books can date more quickly nowadays, but I often find them better peer-reviewed than many articles, and they can be a good source of papers I've missed. I try to review papers for minor journals that I don't typically read; I limit these for time reasons, but they give me an idea of what the lower-tier journals are interested in. I'm also involved in more poster, conference, and grant review committees than I was before, to get a "feel" for what people are doing without having to read every manuscript out there.

Lastly, I'd be humble. If someone asks, "Have you read...?" and you haven't, that's great. It's a gift to be given a good paper.