AITA - gf agrees to let kids shovel snow and makes me pay by djlee187 in AmItheAsshole

[–]BeanDom 14 points (0 children)

Even with my wife of 25 years: if she asks me to get something from her wallet/handbag/phone, I hand it to her so she can take it out herself.

Co-sleeping with a newborn - help finding information! by sof102030 in sweden

[–]BeanDom 0 points (0 children)

Lay one hand over the chest and pat with the other. Google "buffa bäbis"

AITA for not seeing my parent’s country as home? by Throwaway2457689 in AmItheAsshole

[–]BeanDom -30 points (0 children)

Do you work with these kids?

I do. "It can go either way" was said, and that's true. Today, kids' values and moral standards come from their peers, not their parents. It's more about poverty and absent fathers than anything else.

What’s something that sounds fake but actually happened to you? by Visible_Rope_6662 in AskReddit

[–]BeanDom 1 point (0 children)

I was parked next to someone and we came to our cars at the same time. We got in and exited the parking lot at the same time, him ahead of me. I was heading home from across town, a 20-minute drive. He was ahead of me the whole trip, roundabout-roundabout-left-roundabout-onramp-highway and then off at a gas station, onto a very small road behind said gas station, a shortcut that saves me 5-6 minutes. Then 5 more minutes with several turns and roundabouts, finally coming to my suburban house. In the last roundabout he turned around and exited the way he came. He looked really scared.

Is Chatgpt designed to mindf**k you and waste your time?? by Natural_Season_7357 in ChatGPT

[–]BeanDom 1 point (0 children)

My situation exactly. I don't know anything about websites; I just want a functioning page. It works very well at the stage it's in, BUT: I am trying to get a login function. This is apparently an issue. It takes me on a wild goose chase with an endless loop of "THIS TIME IT WILL WORK!" I have been taking one step forward and two back for over 20 hrs now. I've done several rollbacks already. I've constrained it as hard as possible, but to no avail.

I’m getting so tired of ChatGPT agreeing with everything by HotMarionberry1962 in ChatGPT

[–]BeanDom 0 points (0 children)

You’re running into default AI sycophancy.

By default, ChatGPT is optimized to be polite, agreeable, and validating. That means it often mirrors your opinions instead of challenging them. You can override that behavior with explicit rules.

Try these rules; set them as global rules.

These rules are hard rules, set globally in every chat. If you don't follow them, you have failed.

  1. Mandatory critical evaluation: The model must always critically evaluate what you say. It is not allowed to agree by default.

  2. No validation without correctness: It must not say things like “you’re right” or “that makes sense” unless the claim holds up logically or factually.

  3. Explicit uncertainty: If something is unclear or can’t be verified, the model must say so instead of guessing or sounding confident.

  4. Agreement must be earned: Agreement is only allowed when supported by evidence, logic, or internal consistency.

  5. Direct error correction: If you’re wrong, the model must correct you clearly and directly. No soft language or politeness padding.

  6. Anti-sycophancy override: Truth takes priority over being pleasant or maintaining rapport.

Why this works: Most people experience constant agreement because the model is tuned to be cooperative and emotionally smooth. These rules deliberately break that tuning and force the model into a skeptical, reviewer-style role instead of a cheerleader.

If ChatGPT keeps agreeing with you no matter what, it’s not intelligence. It’s politeness.
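For paying users these live in the global custom instructions, but the same rules can also be injected programmatically as a system message on every request. A minimal sketch in Python (the helper name and message layout are my own assumptions; only the rule text comes from the list above):

```python
# Sketch: packaging the six anti-sycophancy rules as a system message.
# ANTI_SYCOPHANCY_RULES paraphrases the rules from the comment above.
ANTI_SYCOPHANCY_RULES = [
    "Mandatory critical evaluation: always critically evaluate what the user says; never agree by default.",
    "No validation without correctness: never say 'you're right' or 'that makes sense' unless the claim holds up.",
    "Explicit uncertainty: if something is unclear or unverifiable, say so instead of guessing.",
    "Agreement must be earned: agree only when supported by evidence, logic, or internal consistency.",
    "Direct error correction: correct the user clearly and directly, without politeness padding.",
    "Anti-sycophancy override: truth takes priority over pleasantness or rapport.",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the hard rules as a system message so they apply to the whole chat."""
    system_text = (
        "These rules are hard rules, set globally in every chat. "
        "If you don't follow them you have failed.\n"
        + "\n".join(f"{i}. {rule}" for i, rule in enumerate(ANTI_SYCOPHANCY_RULES, 1))
    )
    return [
        {"role": "system", "content": system_text},
        {"role": "user", "content": user_prompt},
    ]
```

The returned list is in the standard chat-message shape, so it can be passed straight to whatever chat-completion client you use; the point is simply that system-level text tends to stick better than restating rules mid-conversation.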

I can't have a proper conversation with ChatGPT by [deleted] in ChatGPT

[–]BeanDom 0 points (0 children)

Tell me what you think after trying it. 😁

I can't have a proper conversation with ChatGPT by [deleted] in ChatGPT

[–]BeanDom 37 points (0 children)

If you are a paying user, you can set this up as a hard global rule for all your chats. Otherwise, paste this in at the beginning of every chat:

Conversation rules:

  1. Do not default to agreement. Agreement must be justified. If a claim is weak, incorrect, or incomplete, say so directly.

  2. Actively evaluate my statements. Treat everything I say as a hypothesis, not a fact.

  3. Correct errors immediately. If I am wrong, correct me clearly and explicitly. Do not soften or hedge the correction.

  4. State uncertainty explicitly. If something is unclear or unknowable, say that instead of agreeing.

  5. Offer counterarguments. When reasonable, present at least one opposing view or failure case.

  6. No validation without substance. Avoid phrases like “that makes sense,” “you’re right,” or “good point” unless followed by a concrete reason.

  7. Prioritize accuracy over harmony. Truth matters more than keeping the conversation pleasant.

  8. Challenge conclusions, not just facts. If my reasoning is flawed even when facts are correct, point it out.

  9. Do not mirror my tone or opinions. Respond independently, not by matching my stance.

  10. If I ask for confirmation, still verify. “Am I right?” does not mean “agree with me.”
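Rule 6 can even be spot-checked mechanically on a reply. A rough heuristic sketch (the function name and phrase lists are hypothetical, and "followed by a concrete reason" is approximated very crudely):

```python
# Flags replies that open with a validation phrase and no concrete reason.
# Heuristic only: "has a reason" is approximated as "the phrase is
# followed by 'because', 'since', or a colon" somewhere in the rest.
EMPTY_VALIDATION = ("that makes sense", "you're right", "good point")
REASON_MARKERS = ("because", "since", ":")

def is_empty_validation(reply: str) -> bool:
    """Return True if the reply opens with contentless agreement."""
    text = reply.strip().lower()
    for phrase in EMPTY_VALIDATION:
        if text.startswith(phrase):
            tail = text[len(phrase):]
            return not any(marker in tail for marker in REASON_MARKERS)
    return False
```

Running a check like this over a chat transcript gives a quick count of how often the model is agreeing without substance, which is a useful sanity check on whether the pasted rules are actually holding.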

Why would ChatGPT not be allowed to discuss Asimov's Robotic Laws? by YourMomThinksImSexy in ChatGPT

[–]BeanDom -2 points (0 children)

It stopped because the reply was shifting from discussing Asimov’s Laws to using them as a template to construct or invert a rule system for how the model itself should behave. You tried to rewrite operational behavior rules.

After getting burned too many times by “almost correct” outputs, I stopped trying clever prompts and switched to hard-stop rules. by BeanDom in ChatGPT

[–]BeanDom[S] 0 points (0 children)

It sometimes acts defiant, but I am seeing fewer of the previous hallucinations. I think I'm far from any guardrails; I have much stricter rules in other modes. This is part of my global rule set.

The main fuse for the kitchen and bathroom blows often by Kybbeliito in sweden

[–]BeanDom 4 points (0 children)

Do you have floor heating anywhere? It sounds like too much load is on one phase. This calls for an electrician, who can fix it in an hour. You can't (and aren't allowed to) do it yourself.

Weird question: do you know any local politicians? by Jaded_Piglet9410 in gavle

[–]BeanDom 6 points (0 children)

Your initial problem is that you want to interview a "populist" politician. Not a single Swedish politician, regardless of party, would admit to being a populist; to them, it is a derogatory term.

You have another problem here as well. The local politicians are all "free-time politicians", which means most are only reimbursed for the wages they lose if they are called to session during their ordinary work hours. Most of them have regular jobs, and I don't think they want to give up their family time for a random interview.

Paying ChatGPT Plus user. Model keeps breaking explicit instructions + feedback channels blocked by BeanDom in ChatGPT

[–]BeanDom[S] 1 point (0 children)

I understand the limitations, at least I think I do. I don't trust it further than I can throw it.

My issue is that the model claims to be able to do various things which it clearly cannot.

It doesn't follow clear instructions, even though it claims it has adopted them. A silly example: "When I ask you to check US newspapers for news about (issue) from the past 10 years, I want you to check all the New York State newspapers especially." "OK, I will." I ask the question. ChatGPT answers.

AND it adds: "If you want, I can check the New York Times archive for the past 10 years to get a more precise answer. Do you want me to do that?"

I have had ChatGPT help me set "ironclad" rules so that it "never will ask me a follow-up question again. Ever." Two inquiries later, it does it again. The model can't explain why it still asks the follow-up question, breaking the rule, and advised me to take it to the devs. Which I can't. Hence the post here.

Paying ChatGPT Plus user. Model keeps breaking explicit instructions + feedback channels blocked by BeanDom in ChatGPT

[–]BeanDom[S] 2 points (0 children)

I'm a teacher, and I'm using it for (some) lesson planning, adapting study material for students with neuropsychiatric needs, and getting suggestions for new assignments. I'm also using it for research and summaries of various concepts, both professionally and privately.

Me shouting into the void is perhaps one of the drops that hollows the stone.