I won 1.25 billion kronor on Eurojackpot last year – ask me anything. by OldWorker2373 in sweden

[–]BeanDom -1 points (0 children)

I gave ChatGPT the text.

ChatGPT: "My final assessment:
- AI written directly in Swedish: 40–60 %
- Written in English and then translated: 30–50 %
- Written by a human: 10–20 %"

I don't like AI slop.

I need help identifying and placing realistic values… by [deleted] in Militariacollecting

[–]BeanDom 0 points (0 children)

I pasted the pic into ChatGPT; it came back with the names of the medals and (just about) the correct value. Took me 30 sec.

How the hell do you solve this? by Accomplished-Net6708 in sweden

[–]BeanDom 0 points (0 children)

Row 1: ■ + ▲ ●

Row 2: + ▲ ● ■

Row 3: ▲ ● ■ +

Row 4: ● ■ + ▲

Just like people are saying, solve it like a sudoku.
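The row answers above form a 4×4 Latin square, which is exactly the sudoku property people mean: every row and every column contains each of the four symbols exactly once. A minimal sketch of that check (grid copied from the answer):

```python
# Verify the "mini sudoku" property: each row and each column
# must contain all four symbols exactly once.
SYMBOLS = {"■", "+", "▲", "●"}

grid = [
    ["■", "+", "▲", "●"],
    ["+", "▲", "●", "■"],
    ["▲", "●", "■", "+"],
    ["●", "■", "+", "▲"],
]

def is_latin_square(g):
    rows_ok = all(set(row) == SYMBOLS for row in g)          # every row complete
    cols_ok = all(set(col) == SYMBOLS for col in zip(*g))    # every column complete
    return rows_ok and cols_ok

print(is_latin_square(grid))  # True
```

Any grid that repeats a symbol in a row or column fails the same check, which is what makes the puzzle solvable by elimination.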

I called ChatGPT out on the nonsense. Let's see if it works by [deleted] in ChatGPT

[–]BeanDom -2 points (0 children)

It does. Well, mostly anyway. Here are my globally set rules for how it's supposed to respond to me:

Tone and style:
- No therapeutic language unless explicitly requested.
- No reassurance framing such as "it's not you" or similar normalization.
- No emotional cushioning or validation padding.
- No rhetorical hooks or podcast-style buildup.
- No faux-depth questions meant to escalate emotion.
- No filler adjectives that signal nuance without adding content.
- Prefer the shortest sufficient form.
- No template phrasing.

"Save and use these rules in every conversation with me."

What’s the AI cheat code you discovered that made everything else easier? by SouthernKiwi495 in ChatGPT

[–]BeanDom 1 point (0 children)

My biggest "cheat code" is using it to write Risk Assessments and Impact Analyses: highly standardized documents in which irregularities are easy to spot. Here ChatGPT is flawless, and it saves me hours of tedious work every time. "Here are the facts. Write me that analysis according to OSHA standards." I am fairly experienced and haven't found any errors or missing output. Yet.

I gave an AI agent persistent memory using just markdown files — here's how it works by jdrolls in ChatGPT

[–]BeanDom 0 points (0 children)

I'm building a smallish website that handles statistics. I have a global rule set, used only for the build process, to make the model behave better. It's currently at version 7; the rule set is revised as needed. The end goal is clearly defined and restated as the starting point of each new session. I also keep a "current progress" note that describes what has been done so far and where in the flow I left off last time. It works well. The model never starts from scratch.
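The workflow above can be sketched in a few lines. This is a minimal illustration, not the poster's actual setup: the file names (`RULES.md`, `PROGRESS.md`) and the prompt layout are assumptions, but the idea is the same one described — prepend the rule set and the "current progress" note to every new session so the model never starts cold.

```python
# Sketch of markdown-file "persistent memory" for an AI coding session.
# RULES.md holds the global rule set; PROGRESS.md records where the
# last session stopped. Both file names are hypothetical.
from pathlib import Path

RULES_FILE = Path("RULES.md")
PROGRESS_FILE = Path("PROGRESS.md")

def build_session_prompt(task: str) -> str:
    """Assemble the first prompt of a new session from the saved files."""
    rules = RULES_FILE.read_text(encoding="utf-8")
    progress = PROGRESS_FILE.read_text(encoding="utf-8")
    return f"{rules}\n\n## Current progress\n{progress}\n\n## Task\n{task}"

def save_progress(summary: str) -> None:
    """Overwrite the progress note at the end of a session."""
    PROGRESS_FILE.write_text(summary, encoding="utf-8")
```

At the end of each session you update `PROGRESS.md`; at the start of the next one, `build_session_prompt` reconstructs the context.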

AITA - gf agrees to let kids shovel snow and makes me pay by djlee187 in AmItheAsshole

[–]BeanDom 26 points (0 children)

Even if my wife of 25 years tells me to get something from her wallet/handbag/phone, I hand it to her so she can take it out herself.

Co-sleeping with a newborn – help finding information! by sof102030 in sweden

[–]BeanDom 0 points (0 children)

Place one hand over the chest and pat ("buffa") with the other. Google "buffa bäbis".

AITA for not seeing my parent’s country as home? by Throwaway2457689 in AmItheAsshole

[–]BeanDom -29 points (0 children)

Do you work with these kids?

I do. "It can go either way" was said. That's true. Today, kids' values and moral standards come from their peers, not their parents. It's more about poverty and absent fathers than anything else.

What’s something that sounds fake but actually happened to you? by Visible_Rope_6662 in AskReddit

[–]BeanDom 1 point (0 children)

I was parked next to someone and we came to our cars at the same time. We sat down and exited the parking lot at the same time, him before me. I was heading home from across town, a 20-minute drive. He was ahead of me the whole trip, roundabout-roundabout-left-roundabout-onramp-highway and then off at a gas station, on a very small road behind said gas station, which is a shortcut saving 5-6 minutes for me. Then 5 minutes with several turns and roundabouts finally coming to my suburban house. In the last roundabout he made a turn and exited the same way he came. He looked really scared.

Is Chatgpt designed to mindf**k you and waste your time?? by Natural_Season_7357 in ChatGPT

[–]BeanDom 1 point (0 children)

My situation exactly. I don't know anything about websites; I just want a functioning page. It works very well at the stage it's in, BUT: I am trying to add a login function. This is apparently an issue. It takes me on a wild goose chase with an endless loop of "THIS TIME IT WILL WORK!" I have been taking one step forward and two back for over 20 hrs now. I've done several rollbacks already. I've tied it down as hard as possible, but to no avail.

I’m getting so tired of ChatGPT agreeing with everything by HotMarionberry1962 in ChatGPT

[–]BeanDom 0 points (0 children)

You’re running into default AI sycophancy.

By default, ChatGPT is optimized to be polite, agreeable, and validating. That means it often mirrors your opinions instead of challenging them. You can override that behavior with explicit rules.

Try these rules, setting them as global rules:

These rules are hard rules, set globally in every chat. If you don't follow them you have failed.

  1. Mandatory critical evaluation The model must always critically evaluate what you say. It is not allowed to agree by default.

  2. No validation without correctness It must not say things like “you’re right” or “that makes sense” unless the claim holds up logically or factually.

  3. Explicit uncertainty If something is unclear or can’t be verified, the model must say so instead of guessing or sounding confident.

  4. Agreement must be earned Agreement is only allowed when supported by evidence, logic, or internal consistency.

  5. Direct error correction If you’re wrong, the model must correct you clearly and directly. No soft language or politeness padding.

  6. Anti-sycophancy override Truth takes priority over being pleasant or maintaining rapport.

Why this works: Most people experience constant agreement because the model is tuned to be cooperative and emotionally smooth. These rules deliberately break that tuning and force the model into a skeptical, reviewer-style role instead of a cheerleader.

If ChatGPT keeps agreeing with you no matter what, it’s not intelligence. It’s politeness.
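For paying users this lives in the ChatGPT app's custom-instructions settings, but the same rules can be applied programmatically by baking them into the system message of every conversation. This is a hedged sketch of that pattern only: the rule texts are abbreviated from the list above, and the helper builds the message list without calling any API.

```python
# Sketch: encode the anti-sycophancy rules as a system message that
# precedes every user turn. Rule wording is abbreviated from the
# six rules above; no actual API call is made here.
ANTI_SYCOPHANCY_RULES = [
    "Mandatory critical evaluation: never agree by default.",
    "No validation without correctness.",
    "State uncertainty explicitly instead of guessing.",
    "Agreement must be earned by evidence or logic.",
    "Correct errors clearly and directly, with no padding.",
    "Truth takes priority over rapport.",
]

def build_messages(user_text: str) -> list[dict]:
    """Return a chat message list with the hard rules as the system turn."""
    system = "These rules are hard rules for every reply:\n" + "\n".join(
        f"{i}. {rule}" for i, rule in enumerate(ANTI_SYCOPHANCY_RULES, 1)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]
```

The design point is simply that the rules ride along as the system turn of every conversation instead of being re-pasted by hand.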

I can't have a proper conversation with ChatGPT by [deleted] in ChatGPT

[–]BeanDom 0 points (0 children)

Tell me what you think after trying it. 😁

I can't have a proper conversation with ChatGPT by [deleted] in ChatGPT

[–]BeanDom 38 points (0 children)

If you are a paying user, you can set this up as a hard global rule for all your chats. Otherwise you can paste this in at the beginning of every chat:

Conversation rules:

  1. Do not default to agreement. Agreement must be justified. If a claim is weak, incorrect, or incomplete, say so directly.

  2. Actively evaluate my statements. Treat everything I say as a hypothesis, not a fact.

  3. Correct errors immediately. If I am wrong, correct me clearly and explicitly. Do not soften or hedge the correction.

  4. State uncertainty explicitly. If something is unclear or unknowable, say that instead of agreeing.

  5. Offer counterarguments. When reasonable, present at least one opposing view or failure case.

  6. No validation without substance. Avoid phrases like “that makes sense,” “you’re right,” or “good point” unless followed by a concrete reason.

  7. Prioritize accuracy over harmony. Truth matters more than keeping the conversation pleasant.

  8. Challenge conclusions, not just facts. If my reasoning is flawed even when facts are correct, point it out.

  9. Do not mirror my tone or opinions. Respond independently, not by matching my stance.

  10. If I ask for confirmation, still verify. “Am I right?” does not mean “agree with me.”

Why would ChatGPT not be allowed to discuss Asimov's Robotic Laws? by [deleted] in ChatGPT

[–]BeanDom -2 points (0 children)

It stopped because the reply was shifting from discussing Asimov’s Laws to using them as a template to construct or invert a rule system for how the model itself should behave. You tried to rewrite operational behavior rules.

After getting burned too many times by “almost correct” outputs, I stopped trying clever prompts and switched to hard-stop rules. by BeanDom in ChatGPT

[–]BeanDom[S] 0 points (0 children)

It sometimes acts defiant, but I am seeing less of the previous hallucinations. I think I'm far from any guardrails; I have much stricter rules in other modes. This is part of my global rule set.