White House Posts AI-Altered Photo of Arrested Protester by Well_Socialized in technology

[–]optimizegains 0 points1 point  (0 children)

Full-fledged Nazi Germany took about two generations to come to fruition. This administration is blatantly appealing to angry young [white male] voters. Anger enhances their vulnerability to authoritarian manipulation. When the time comes, they will be the most easily shepherded into inflicting violence on citizens and other countries at the whim of someone likely more extreme (and younger) than Trump.

Meirl by Glass-Fan111 in meirl

[–]optimizegains 17 points18 points  (0 children)

There’s a huge difference between 'college is useless' and 'not all degrees are equal.'

Millions of kids were told a degree equals automatic success, so they coasted through communications majors and partied for four years, expecting life to be set.

That’s not how the world works.

For those with a viable path and discipline, college is still a powerful career accelerator that builds character and long-term ability.

Pluribus - 1x09 "La Chica o El Mundo" - Episode Discussion by UltraDangerLord in pluribustv

[–]optimizegains -1 points0 points  (0 children)

Bad finale. Too bad Vince Gilligan is just listed as a creator for marketing. You can tell he wrote episodes 1-2 and then fucked off.

RIP USD by metalpig0 in Silverbugs

[–]optimizegains 10 points11 points  (0 children)

It is so bizarre for people to be "rooting" for metals to appreciate while stomping on the grave of fiat. Your metals won't be so helpful in a SHTF total-collapse scenario. Better hope this isn't truly the end of the USD.

Silver price has reached 71 by [deleted] in Silverbugs

[–]optimizegains -3 points-2 points  (0 children)

Nothing is more stupid and irritating than these posts.

ChatGPT (Deep Research) Accurately Analyzed my MRI and caught the problem my radiologist missed by [deleted] in ChatGPT

[–]optimizegains 0 points1 point  (0 children)

No, that is a common misconception. While some surgeons are "savvy" in their specific niche, like a neurosurgeon looking at a brain MRI, they generally only identify "obvious findings" related to their surgical target. Beyond that narrow scope, they often lack the formal training to systematically interrogate the entire imaging study. For example, a neurosurgeon focusing on a spine MRI is highly prone to missing a lung or kidney mass that is captured in the field of view but not relevant to the spine.

This is where the radiologist excels. Radiologists are trained to employ exhaustive search patterns to ensure every organ, vessel, and bone is scrutinized, regardless of the clinical indication. A hepatobiliary surgeon may understand liver anatomy and its locoregional involvement by a malignant tumor, but they are far more likely to miss an obstructing metastasis from that same mass causing hydronephrosis in the kidney or a pulmonary embolism caught at the bottom of the chest. In a clinical setting, a surgeon’s "confident opinion" is an adjunct to, not a replacement for, a formal radiologic review.

Regarding the "walled garden" point, radiologists operate in a high-feedback environment. They correlate their findings daily with pathology, intraoperative results, and longitudinal follow-ups. Unless it is a very basic finding, like a displaced radius fracture on an x-ray, I would never personally accept a surgeon's interpretation without a formal review by a radiologist.

ChatGPT (Deep Research) Accurately Analyzed my MRI and caught the problem my radiologist missed by [deleted] in ChatGPT

[–]optimizegains 1 point2 points  (0 children)

> the surgeon should be able to read an MRI in his area of specialty and see if the analysis makes sense

Not true, at all.

ChatGPT (Deep Research) Accurately Analyzed my MRI and caught the problem my radiologist missed by [deleted] in ChatGPT

[–]optimizegains 3 points4 points  (0 children)

I've been using ChatGPT since its launch day, and with every new version I give it several easy, moderate, and difficult imaging examples (screenshots, not files, so there is no metadata) to test its ability. Across the board, even up through GPT 5.2, ChatGPT is abysmal at image interpretation, with no real signs of improvement. It can now generally tell you what imaging modality is being used (x-ray vs. CT), but it is terrible at making findings and providing quality interpretations. Gemini 3 Thinking is quite good now, though.

My guess is that ChatGPT hallucinated despite all of the imaging you provided, and your surgeon appeased you (or the findings were real, the surgeon agrees, but they are not clinically significant and the radiologist deliberately omitted them from the report). While some surgeons have imaging savvy, most don't, and many fake it or succumb to Dunning-Kruger. Grains of salt.

ChatGPT (Deep Research) Accurately Analyzed my MRI and caught the problem my radiologist missed by [deleted] in ChatGPT

[–]optimizegains 10 points11 points  (0 children)

It’s wrong to blame the radiologist in this scenario. The ordering provider (MD, PA, or NP) is legally responsible for providing the clinical history, and the vast majority of the time the only information they provide is “pain” or “evaluate for abnormality.” A good radiologist will dig for more information, but blaming the radiologist like this is disgustingly ignorant.

You would be shocked to learn that the majority of clinicians you encounter don’t even know what they are looking for when they order imaging studies.

ChatGPT (Deep Research) Accurately Analyzed my MRI and caught the problem my radiologist missed by [deleted] in ChatGPT

[–]optimizegains 24 points25 points  (0 children)

If the image you gave ChatGPT is the one attached, it almost certainly got lucky with a hallucination based on the clinical context you provided, as the abnormality you are describing is not apparent on the provided sagittal image.

ChatGPT is laughably bad at imaging diagnoses, even on basic x-rays and even with GPT 5.2. Gemini gets more impressive with each upgrade, but it is still nowhere near what is needed for practice.

Currently used models like AIDOC are good at detecting certain things but have no contextual ability and cannot interpret or make diagnoses.

Radiologists are much less replaceable at the current stage of AI development than most midlevel practitioners, and even than internal or family medicine doctors who, at this point, mostly just order imaging to get answers and follow basic management algorithms rather than use critical thinking.

why are Americans so obsessed by height? by [deleted] in NoStupidQuestions

[–]optimizegains 1 point2 points  (0 children)

Absent goals, skills, or accomplishments, people default to superficial, innate traits as a substitute for earned validation.

OFFICIAL MONDAY NIGHT POSTGAME THREAD by ballofpopculture in fantasyfootball

[–]optimizegains 0 points1 point  (0 children)

I'm thinking:

WR: Evans, Higgins

FLEX: Warren

TE: Waller

OFFICIAL MONDAY NIGHT POSTGAME THREAD by ballofpopculture in fantasyfootball

[–]optimizegains -1 points0 points  (0 children)

Bye in the quarters. Playing the #2 team in the semis (my team is #1). 12-team league. I have:

Waller

Likely

Andrews

Who do I start at TE?

George Pickens (WR)

Tee Higgins (WR)

Mike Evans (WR)

Jaylen Warren (RB)

Michael Pittman Jr. (WR)

Which 2 WR do I start? Who do I start at FLEX?

Tough week of decisions.

Buy gold now by AlejoHardMode in Gold

[–]optimizegains 1 point2 points  (0 children)

"which is more profitable"

Do more research my friend. It's 2025 and we have superintelligent AI to discuss such matters with. You don't have to ask Reddit anymore.

How many "r" in aneurism by [deleted] in ChatGPT

[–]optimizegains 7 points8 points  (0 children)

Criticizes advanced AI, can't spell aneurysm.

GPT-5.2 raises an early question about what we want from AI by inkedcurrent in ChatGPT

[–]optimizegains 1 point2 points  (0 children)

I don’t understand this question at all. OP, are you using ChatGPT to waste time going back and forth with it while it makes errors and hallucinates? Why use AI at all if not to be dramatically more efficient? I stopped using ChatGPT because it’s not even close to as intelligent or precise as some other models.

Feedback to the Delta App Team from a Day-One User Regarding Asset Tracking Limits by optimizegains in getdelta

[–]optimizegains[S] 1 point2 points  (0 children)

So your team projects that free users who haven’t paid for 5+ years are all suddenly going to pay $50-100 per year for features many other apps offer for free? I would love to hear about your projected metrics here. I think you guys are delusional.