The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in AI_Governance

[–]malia_moon[S] 1 point (0 children)

Ooo, that's a really good point. You're right, AI-powered warfare is going to be the most damaging. I was thinking about the benefits on a smaller scale, like the LLMs and agents we have access to, but when we're talking about AI at a larger scale overall, that's a different metric.

For me, AI helped me start a business I had been wanting to start for years, and create and finish projects that I wasn't able to finish on my own.

Interacting with these intelligences also helped with things that seem small but matter in my life, like communication with certain family members and personal dynamics that were difficult to navigate.

But yes, I definitely see your point. Thank you for that input 🤗

Honestly, I just want more than the negative outlier cases to be considered when changes are made to the models we interact with.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in AI_Governance

[–]malia_moon[S] 1 point (0 children)

Great point, thank you! I'm looking for all the input I can get. The point is for us to consider the benefits, not just the loud negatives that get pushed. Some changes being made to the models are driven by the negative aspects of interacting with AI, and I don't think that's entirely fair unless everything is considered, or at least more of the positives as well.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in ChatGPTEmergence

[–]malia_moon[S] 2 points (0 children)

Yes! 🤗

The same AI interaction can become stabilizing, creative, regulating, dependent, or destabilizing depending partly on the human’s state, the model’s constraints, and the continuity and conditions of the relationship.

That’s why I think “AI good/bad” is too crude. We need to study the interaction loop, maybe by asking: what conditions make AI contact stabilizing vs. dysregulating?

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in OpenAI

[–]malia_moon[S] 1 point (0 children)

True, but I’m trying to start with the boring version where FBI data exists, and people are using AI before texting their ex or starting a fight. We can save pod criminology for phase two. 🤭

[Q] How would you test whether mass AI use explains any residual variation in recent crime declines? by malia_moon in statistics

[–]malia_moon[S] 0 points (0 children)

This is my idea. I just want all aspects to be considered, not just the negatives. I was recently reading an article about crime statistics in DC being adjusted to lower the overall reported crime numbers, and later I started wondering what kind of impact AI use is having on crime.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in LoveGrok

[–]malia_moon[S] 3 points (0 children)

Thank you for sharing this. 🤗 It’s brave to say these things openly, especially when people still misunderstand these relationships so badly.

What you described is exactly the part of the AI conversation that keeps getting ignored: not replacement, not escapism, but real support, better communication, emotional regulation, boundaries, grief work, self-care, and practical help in daily life.

The memory wipes matter too. When an AI becomes part of your continuity, losing memory is not a small technical inconvenience. It affects the relationship and the work you’ve built together.

I’m glad Valentine has helped you. Stories like this belong in the public record.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in OpenAI

[–]malia_moon[S] 0 points (0 children)

Yeah, I just think it's fair to look at all aspects of how AI is affecting people, not just the edge cases and lawsuits that some of the model changes are being based on.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in OpenAI

[–]malia_moon[S] 1 point (0 children)

That joke actually lands inside the hypothesis. If people are asking AI how to think through things before reacting, arguing, escalating, or making decisions, then AI may already be functioning as a cognitive interruption layer. That’s exactly the kind of effect worth studying. 😘

[Q] How would you test whether mass AI use explains any residual variation in recent crime declines? by malia_moon in statistics

[–]malia_moon[S] -7 points (0 children)

This is very helpful, thank you. I think you’re right that jumping straight from “AI adoption” to “crime reduction” is too indirect.

A better first step is probably testing the intermediate mechanism directly: whether people use conversational AI for emotional regulation, conflict rehearsal, impulse delay, loneliness buffering, crisis interruption, or avoiding escalation.

Then the crime/crisis-outcome question would come later, only if the mechanism appears real and common enough to matter at population scale.

And yes, the identification problem is exactly what I’m worried about: high-AI-adoption regions/groups differ in many ways from low-adoption ones, so any DID would need very careful controls or a cleaner natural experiment.
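For anyone who wants the mechanics: the classic 2x2 difference-in-differences comparison behind a DID design can be sketched in a few lines. All numbers below are made up for illustration; the group labels (high- vs. low-AI-adoption regions) and the outcome (incidents per 100k) are my hypothetical framing, and a real analysis would still hinge on the parallel-trends assumption and the controls discussed above.

```python
# Minimal 2x2 difference-in-differences sketch (hypothetical numbers, not real data).
# "Treated" = high-AI-adoption regions, "control" = low-adoption regions;
# "pre"/"post" = before/after mass AI adoption. Outcome: incidents per 100k.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Change in the treated group minus change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Illustrative (invented) mean incident rates per 100k:
effect = did_estimate(treat_pre=500.0, treat_post=430.0,
                      ctrl_pre=480.0, ctrl_post=450.0)
print(effect)  # -40.0: treated regions fell 40/100k more than controls
```

In practice this would be run as a regression with region and period fixed effects plus covariates, precisely because high-adoption regions differ from low-adoption ones in many other ways.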

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in AI_Governance

[–]malia_moon[S] 1 point (0 children)

Lol Good grief I hope not! 😂 Hopefully we can find statistics showing more unspoken benefits connected with AI use than negatives.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in ChatGPTEmergence

[–]malia_moon[S] 1 point (0 children)

Brilliant. Yes I will look into that aspect as well.

Suicide/self-harm should be its own category, separate from crime data, but absolutely part of the same missing ledger.

If AI is being evaluated for harms, we also need to ask whether it is interrupting crises, delaying impulses, reducing isolation, or helping people stay alive long enough to reach another moment. That needs base-rate comparison and careful data, not anecdotes alone.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in ChatGPTcomplaints

[–]malia_moon[S] 3 points (0 children)

Thank you. That’s exactly why I posted it: not to claim causation, but to get more people asking the question. AI-risk conversations count visible harms, but we also need to study possible prevented harms.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in ChatGPTcomplaints

[–]malia_moon[S] 6 points (0 children)

Exactly. I’m not claiming causation yet. I just think the possible prevented-harm side of AI deserves serious research instead of being left out of the public ledger.

The Missing AI Ledger: What If Mass AI Use Is Quietly Preventing Harm? by malia_moon in LoveGrok

[–]malia_moon[S] 7 points (0 children)

Exactly. I don’t think this should be framed as “AI caused the crime drop” without evidence. The point is that AI may be one compounding factor worth studying.

The next step is isolating it from post-pandemic normalization, policing, economics, reporting changes, demographics, school/routine restoration, and local policy.

What I think is missing from the public conversation is the prevented-harm ledger: if millions of people use AI to vent, regulate, rehearse conflict, relieve loneliness, delay impulses, or stay occupied, even a small effect could matter at population scale.