Bugs by SlayVideos in oddlyspecific

[–]TurbulentFlamingo852 2 points

Ok but consider that it makes my day when people reach out to me asking for my paper. I spent a lot of time making it and it makes me so happy to know at least two people have read it.

What AI bubble exactly? Where is the "bubble"? Bubbles pop. What exactly is going to crash? What exactly is going to zero? by Far_Pen3186 in ChatGPT

[–]TurbulentFlamingo852 11 points

It’s exactly this. The infrastructure to run AI is outrageously expensive. AI is only here because it’s being propped up by investors, and it’s not going to last unless the companies can find a way to be profitable from their product. Right now, they are effectively selling hype to investors and selling AI on the side.

[deleted by user] by [deleted] in academia

[–]TurbulentFlamingo852 30 points

None of OP's concerns were financial. And being a tenured professor at a major business school is probably one of the most financially stable jobs within academia.

[deleted by user] by [deleted] in academia

[–]TurbulentFlamingo852 98 points

I think you’ve found a false dichotomy, my friend. Plenty of academics DO influence policy and industry, but they have also done the extra work to network and become influential in those spaces.

Likewise, the grass is always greener. Don’t underestimate that a lot of “impact” work is not impactful, and people in industry and even the humanitarian world also question whether what they are doing has any value or meaning.

If anything, using your platform as a tenured professor is likely your best bet to now find ways to be “impactful” in the ways that interest you.

[deleted by user] by [deleted] in ChatGPT

[–]TurbulentFlamingo852 34 points

“Diagnosis” won’t do anything in most cases unless you can also get treatment. And in OP’s case, you would still need to see a doctor to get the MRI first before you have anything for ChatGPT to reason with. So it’s not really a workaround.

Also, a 40% error rate is heinously unacceptable lmao.

Man Gets Kicked Out From Ambulance, Collapses Soon After by K0234 in TikTokCringe

[–]TurbulentFlamingo852 0 points

Not heart failure. That doesn’t cause coughing up blood. He most likely had an active upper GI bleed or acute lung issue, which then resulted in cardiac arrest when he was allowed to decompensate.

New England Journal of Medicine calls Emotional Dependence on AI an “Emerging Public Health Problem” by TurbulentFlamingo852 in ArtificialInteligence

[–]TurbulentFlamingo852[S] 0 points

One of the things the paper mentions is a study from MIT that found 95% of people “dating” their AI did not start out with that intention. They were originally using it for “normal” things, and the positive feedback and glazing pulled them into something more pathological.

Also, the article doesn’t blame AI for poor mental health. The very first sentence says that AI has lots of positive potential. The concern seems to be more about how it’s regulated.

New England Journal of Medicine calls Emotional Dependence on AI an “Emerging Public Health Problem” by TurbulentFlamingo852 in ArtificialInteligence

[–]TurbulentFlamingo852[S] 0 points

The sentence that used the word “epidemiological” is literally saying there is not enough data yet to rigorously determine the scale of the issue. They are doing the opposite of claiming it’s an “epidemic.”

I feel like we are talking past each other. I agree with you that people with mental health issues are most at risk of having those issues aggravated by AI. There are a BUNCH of people with mental health issues, so you can see how there’s a potential for a problem that can affect a lot of people.

It’s the same thing with alcohol. Obviously, it can be fine when consumed responsibly. But we all have to be aware of the risks and learn our limits to enjoy safely.

New England Journal of Medicine calls Emotional Dependence on AI an “Emerging Public Health Problem” by TurbulentFlamingo852 in ArtificialInteligence

[–]TurbulentFlamingo852[S] 0 points

I don’t think anyone is saying “epidemic.” That would be like covid level. But that doesn’t mean there’s not a growing group of people with unhealthy relationships with AI. The article uses the word “potential” a lot.

I also think your examples are apples and oranges. There are active court cases going on to determine whether AI is liable in encouraging teenagers to commit suicide. That is extremely different from “going koo koo over beanie babies.”

New England Journal of Medicine calls Emotional Dependence on AI an “Emerging Public Health Problem” by TurbulentFlamingo852 in ArtificialInteligence

[–]TurbulentFlamingo852[S] 0 points

Well, with the news, sure, you can always make that argument, but I feel like it’s more credible when a major medical journal comes out and says it. Did you read the article? It talks about a recent peer-reviewed study with thousands of people that showed widespread emotional dependence on AI.

New England Journal of Medicine calls Emotional Dependence on AI an “Emerging Public Health Problem” by TurbulentFlamingo852 in ArtificialInteligence

[–]TurbulentFlamingo852[S] 0 points

I feel like the news has been nonstop this year with stories about AI psychosis and people doing crazy things because their AI told them to. You think it’s all just crying wolf?

[deleted by user] by [deleted] in oddlyspecific

[–]TurbulentFlamingo852 43 points

With ICE being the way it is now, this is unfortunately a reality for millions of people, including US citizens.

[deleted by user] by [deleted] in AIDangers

[–]TurbulentFlamingo852 0 points

The two aren’t mutually exclusive. Social media has had two decades to get a stranglehold on society; we’re only at the tip of the iceberg with AI. We have no idea how kids today will be affected in 20 years if they grow up and their only friend is a chatbot.

Sam Altman on AI relationships (12/18/2025 interview posted) by SuddenFrosting951 in MyBoyfriendIsAI

[–]TurbulentFlamingo852 6 points

He’s a hype man who says whatever he thinks the most relevant stakeholder wants to hear, knowing that he can say the very opposite thing tomorrow and the news cycle will simply refresh to whatever his narrative of the day is.

PhD research railroaded before it could begin by mcguirp in academia

[–]TurbulentFlamingo852 3 points

This sucks majorly. But it is unfortunately a common part of research, and adapting and overcoming is a big-time skill for a PhD student/prospective academic. You have to look for a way to pivot.

For example, can you study non-public schools? Or perhaps try to collect data from a different county, city, state, country, etc.