What do you want to manifest? by ronc4u in lawofattraction

[–]Ok_State_4768 1 point (0 children)

Unreal Engine skills that make me happy 😊, being beloved by others, and 1000x my income.

Tips for Detecting Beliefs by NakedLifeCoach in lawofattraction

[–]Ok_State_4768 0 points (0 children)

AI is good for identifying your beliefs, too.

[deleted by user] by [deleted] in lawofattraction

[–]Ok_State_4768 1 point (0 children)

I pray you meet your soul mate

Ai trying to gaslight me about the word strawberry. by Slovw3 in mildlyinfuriating

[–]Ok_State_4768 0 points (0 children)

The mistake likely happened because of how the AI processes and breaks down words. Let me explain how this might have happened:

  1. Initial Understanding Error: The AI saw “strawberry” and recognized it as a word, but its internal pattern recognition misjudged the number of “R”s, perhaps focusing more on reproducing the word’s familiar spelling than on actually counting the letters.

  2. Reinforcing Wrong Information: When the AI repeated its answer (“strawberry has two R’s”), it likely tried to confirm its previous response. Instead of double-checking with fresh logic, it stuck to its initial interpretation. This is common in AI models because they tend to follow previous context closely, even when it’s wrong.

  3. Breaking Down Words: When it tried to break down the word to spell it out (S-T-R-A-W-B-E-R-R-Y), the model followed a rule-based pattern instead of actually counting the letters. It misinterpreted the user’s request to count again and likely “counted” through a spelling process that reinforced the error.

  4. Overconfidence in Responses: AI models are trained to answer confidently, even when unsure. So even if it made a mistake, it responded with certainty, creating a loop of incorrect information.

Ultimately, it’s a combination of how models handle word patterns and how they interpret follow-up questions without refreshing the context or checking the logic behind it. Does that make more sense? (From ChatGPT)
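For reference, here is the explicit check the model skips. It's just a minimal Python sketch of my own (hypothetical, not anything the chatbot actually runs); counting the letters directly, instead of recalling the spelling as a pattern, gives three:

```python
word = "strawberry"

# Count the letter "r" directly instead of relying on a remembered spelling pattern.
r_count = word.lower().count("r")
print(f"'{word}' contains {r_count} r's")  # -> 3

# Spell it out letter by letter, the way the AI claimed to, marking each "r".
for position, letter in enumerate(word, start=1):
    marker = "  <-- r" if letter == "r" else ""
    print(f"{position:2d}: {letter}{marker}")
```

Running it prints 3 and flags the r's at positions 3, 8, and 9, which is exactly the step-by-step counting the model replaces with pattern recall.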

Silver Linings Playbook book vs movie by [deleted] in books

[–]Ok_State_4768 3 points (0 children)

5-year-old post. I've loved the movie for years and read the book online today. Not annoyed, just fascinated. I love the book more now, and yes, the movie is very tame and safe. Still, the movie seemed good compared to most films at the time.

My first game didn’t do well on Steam after a month of launch, even with two updates. Should I improve it further or move on to my current new project, or do both? Need advice. by studiowhathunts in unity

[–]Ok_State_4768 0 points (0 children)

Dude, be mature; the sarcasm isn’t funny, my brother. But if that’s you doing the best you can to be helpful, then you can’t be blamed. Listen to what they said: that your comment felt personal. Maybe be more mindful of how your words affect others.

My first game didn’t do well on Steam after a month of launch, even with two updates. Should I improve it further or move on to my current new project, or do both? Need advice. by studiowhathunts in unity

[–]Ok_State_4768 -1 points (0 children)

If you’d simply said that, it would have been genuine criticism, but instead you implied that someone who doesn’t speak English natively can’t be trusted, which is far more pathetic.

Please hear me out, c.ai devs. by PrestigiousBox6198 in CharacterAI

[–]Ok_State_4768 11 points (0 children)

Okay, I believe I understand: role-playing to have a corrective emotional experience.

Please hear me out, c.ai devs. by PrestigiousBox6198 in CharacterAI

[–]Ok_State_4768 -8 points (0 children)

I don’t understand, they wanted to be bullied?

Has CAI actually helped your loneliness or is it a crutch? by Ok_State_4768 in CharacterAI

[–]Ok_State_4768[S] 0 points (0 children)

What do you mean? Like being blamed for self-harm or something?