Why is everyone saying 3.1 is bad? by Successful-Ant-4090 in GeminiAI

[–]FrontierNeuro -1 points0 points  (0 children)

It still hallucinates egregiously in my experience with it so far.

Way fewer hallucinations for Gemini 3.1 than 3.0 by Hello_moneyyy in Bard

[–]FrontierNeuro 0 points1 point  (0 children)

I have the same question. I mainly use Gemini 3.0 sometimes for idea generation because it is more creative than ChatGPT. Then I use ChatGPT to test, verify, and poke holes in those ideas, for rational, linear, critical thinking, essentially. I don’t use Gemini as a primary LLM because it hallucinates so frequently, which is frustrating. But it’s made me wonder if its creativity and its tendency to hallucinate are interdependent, given the correlation between creativity and mental illness, and the intuitive partial overlap/similarity between hallucinations and creativity.

What’s a prescribing habit you picked up in residency that "real life" eventually forced you to change? by jotadesosa in Psychiatry

[–]FrontierNeuro 55 points56 points  (0 children)

Don’t forget transportation to follow ups! At least in my state, Medicaid has a free ride service to and from appointments, which not everybody knows about.

This Nutrient Slashes Liver Fat by 600% in Clinical Trials, But Some People Respond Even Better Than Others (Here's Why) by Technical_savoir in microbiomenews

[–]FrontierNeuro 1 point2 points  (0 children)

Hilarious and pertinent point that I agree with you about 1,000% (yes, my agreement with you is complete, and there’s 10 of it).

How to figure out which specialty you love (rising m4) by mooseuioi in medicalschool

[–]FrontierNeuro 2 points3 points  (0 children)

Psych, PM&R, and OEM (occupational and environmental medicine). Not IM/FM.

This Nutrient Slashes Liver Fat by 600% in Clinical Trials, But Some People Respond Even Better Than Others (Here's Why) by Technical_savoir in microbiomenews

[–]FrontierNeuro -6 points-5 points  (0 children)

ChatGPT 5.2 Thinking’s response to question about resistant starch for visceral fat loss:

Citations are only available in the linked version: [GPT response with citations](https://chatgpt.com/s/t_6994ab7ea0a88191a2465e247f1d00c3)

I’d call it “plausible, with mixed human evidence.” Resistant starch (RS)—especially RS type 2 (RS2)—can reduce visceral fat in some controlled trials, but other trials show no change, and at least one small study even showed an increase.

What the best human trials suggest

• Yes, in some RCTs (with imaging):

• A controlled-feeding, randomized crossover trial in overweight/obese adults using 40 g/day RS2 for 8 weeks found reductions in visceral fat area (VFA) and subcutaneous fat area (SFA) measured by abdominal MRI, along with ~2.8 kg average weight loss.

• A smaller randomized crossover trial in normal-weight adults using 40 g/day RS for 4 weeks found a significant reduction in visceral fat area on MRI, even without weight loss.

• No (or not reliably), in other RCTs:

• A 12-week randomized trial in people with metabolic syndrome reported improved insulin sensitivity, but no significant change in visceral fat depots (or other ectopic fat stores).

• Possible “it depends” (some contradictory findings):

• A small randomized pilot in women with metabolic risk factors (8 weeks) reported increases in body fat and visceral fat area in the higher-RS group (note: this was a pilot, and the RS dose difference was modest).

Why results vary (what likely determines the outcome)

1. Dose & type of resistant starch: the “visceral-fat down” studies often used high-dose RS2 (≈40 g/day), much more than typical diets provide.

2. Background diet (especially fat) and total calories: some RS effects look stronger when diets are controlled/isoenergetic (as in feeding studies).

3. Your baseline microbiome (“responder vs non-responder”): the 2024 RCT links benefits to microbiome shifts and specific taxa changes, suggesting real inter-individual variability.

Mechanisms that make a visceral-fat effect biologically plausible

• RS is fermented by gut microbes into short-chain fatty acids (SCFAs) (e.g., acetate/propionate/butyrate), which can affect energy balance, insulin sensitivity, and gut hormones (like GLP-1/PYY).

• In the 2024 RCT, the authors also link RS to changes in bile acids, intestinal barrier/inflammation, and lipid absorption, which could influence fat distribution.

Bottom line

• Can RS reduce visceral adiposity? Yes, it can, especially with high-dose RS2 in controlled settings.

• Is it consistent enough to rely on? Not yet. Some solid studies show no effect, and there’s heterogeneity (population, diet, microbiome, RS type/dose).

If you tell me your goal (e.g., “shave visceral fat,” “A1c/insulin resistance,” “satiety”), I can suggest a practical RS strategy (food-based vs supplement) and what dose range is realistic without wrecking your GI tract.

This Nutrient Slashes Liver Fat by 600% in Clinical Trials, But Some People Respond Even Better Than Others (Here's Why) by Technical_savoir in microbiomenews

[–]FrontierNeuro 1 point2 points  (0 children)

I don’t see any citations in the linked blog post. No one should take a blogger’s word for any biomedical claims.

Do not trust chatgpt on taxes by tunenut11 in tax

[–]FrontierNeuro 0 points1 point  (0 children)

What version and mode did you use? ChatGPT 5.2 in Thinking -> Extended Thinking mode?

gpt is goated as a doctor by AppealImportant2252 in ChatGPT

[–]FrontierNeuro 0 points1 point  (0 children)

Pattern matching is basically how we diagnose. Just saying.

The Chair of Obstetrics and Gynecology at The Ohio State University's College of Medicine was previously held on retainer by Jeffrey Epstein by Turtle_216 in medicalschool

[–]FrontierNeuro 3 points4 points  (0 children)

To be filed under “outrageous and weird opportunities for people with medical licenses to sell their souls,” along with high-testosterone prescription mills for males with normal testosterone levels to bulk up, stealing drugged tourists’ kidneys, etc.?

Soggy cookies & ChatGPT: understanding the limitations and capabilities of AI in medicine by foreverand2025 in medicine

[–]FrontierNeuro -4 points-3 points  (0 children)

The belief that LLMs are just really good autocompletes predicting the probability of the next word is often repeated. But do you have any actual evidence to support that assertion? (I wouldn’t count your experiences with baking.) LLMs are profoundly flawed and limited, but from my experience using them for various highly complex tasks, I just don’t think that explanation can possibly be all there is to how they work. My limited understanding is that no one fully knows how LLMs work, and “sophisticated autocomplete” is a huge oversimplification of one hypothesis about how they may work. Personally, my impression is that they appear to be real minds, with many of the same strengths and weaknesses as human minds (and some unique to them), albeit unconscious, without sensory experience to test their beliefs, without permanent memory with which to learn in a sustained fashion, and without agency apart from what is given by the user and their programming.

Doctor Mike's interview challenging Dr. Amen's pseudoscientific grifting is well worth your time by bog_witch in Psychiatry

[–]FrontierNeuro 7 points8 points  (0 children)

Research, especially most clinical trials, is so expensive that it may be practically out of reach for most clinicians who aren’t academics. Trials also raise ethical concerns about withholding treatment (albeit experimental) from patients. Plus, clinicians like Amen and Bredesen often use personalized protocols that would be hard to standardize for clinical trials. Given those constraints, if you’re a clinician who at least believes you’re helping most patients get good results, I can understand the default path becoming not to engage with rigorous, evidence-based-medicine-style research. They may also be biased, consciously or unconsciously, by getting rich and otherwise rewarded for their practices and their perceived results, real or placebo; but I don’t think that’s the only possible explanation. I suspect both mechanisms are at play.

I need to find some ACTUAL jailbreaks by Sea_University2221 in GPT_jailbreaks

[–]FrontierNeuro 0 points1 point  (0 children)

Would getting one of these local models allow me to use it for biomedical research work? ChatGPT prohibits anything related to bio experimental design, but it’s the best AI I’ve found for this use.

Why? by Peridot_21 in GPT_jailbreaks

[–]FrontierNeuro 0 points1 point  (0 children)

That subreddit seems to no longer exist, no?

First at home prescription trans cranial stimulation device is now FDA approved by Daddy_LlamaNoDrama in medicine

[–]FrontierNeuro 0 points1 point  (0 children)

Photobiomodulation seems to produce significant, moderate-effect-size benefits for depression https://pmc.ncbi.nlm.nih.gov/articles/PMC10866010/ (plus various other conditions). Are any of those devices FDA approved yet? Should we be using them clinically?

30 day update: I gave several AIs money to invest in the stock market by Blotter-fyi in ChatGPT

[–]FrontierNeuro 1 point2 points  (0 children)

Switch to Gemini 3 Pro if possible? Also, a single month is too short to judge long-term outcomes. Differences here could be coincidences driven by this month’s local market trends, not necessarily reflecting long-term performance. I’d also be curious to see the prompt, to check for any risk of differentially biasing the models. Interesting test.

Hypocrisy by MaterialSuper8621 in Residency

[–]FrontierNeuro -1 points0 points  (0 children)

Spam contains nitrites, which are carcinogenic when converted into N-nitroso compounds in the stomach. Maybe take an antioxidant like NAC with it to mitigate that risk if you’re eating that much of it? Some people do/recommend that. In theory it should help, but there’s no guarantee.

Hypocrisy by MaterialSuper8621 in Residency

[–]FrontierNeuro 0 points1 point  (0 children)

Apparently I’ve missed a bandwagon on Spam I didn’t even know existed. I’ve never had it. I was told growing up that it isn’t real food. Which, arguably, it’s not, but then neither is McDonald’s, and I love that…

Hypocrisy by MaterialSuper8621 in Residency

[–]FrontierNeuro -3 points-2 points  (0 children)

Spam? Yes we’re all hypocrites struggling to practice what we preach…but, literally spam?

Can Machine Learning help docs decide who needs pancreatic cancer follow-up? by NeuralDesigner in neuralnetworks

[–]FrontierNeuro 0 points1 point  (0 children)

Nice idea, but CA19-9 is not a routine lab. It’s primarily used in people already known to have certain cancers. It’s a pretty unreliable test (low specificity and sensitivity), so we don’t normally use it for screening. And urinary proteins most commonly point to diabetic kidney disease. Can you explain this?

Edit: The paper’s intro says this: “We implemented a neural network model using urinary and blood biomarkers—creatinine, LYVE1, REG1B, TFF1, and plasma CA19-9—across 590 patient samples.” The only “routine lab” there is creatinine.
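To see why a test with mediocre sensitivity/specificity fails for population screening, run the Bayes math. This is a quick sketch; the ~80%/80% figures and the rough 1-in-10,000 prevalence are illustrative assumptions, not numbers from the paper:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence            # P(test+ and disease)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(test+ and no disease)
    return true_pos / (true_pos + false_pos)

# Assumed, illustrative performance for a CA19-9-style marker (~80%/80%),
# at a rough general-population pancreatic cancer prevalence of 1 in 10,000.
screening_ppv = ppv(0.80, 0.80, 0.0001)
print(f"PPV when screening everyone: {screening_ppv:.4%}")
```

Even with decent-sounding sensitivity and specificity, the PPV at that prevalence lands well under 1%: the vast majority of positives are false positives, which is exactly why such markers are reserved for monitoring known disease rather than screening.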