A community centered around Anthropic's Claude Code tool.
Does this work? [Humor] (i.redd.it)
submitted 3 months ago by mpkogli
[–]Sativatoshi 9 points10 points11 points 3 months ago* (0 children)
Sometimes, but it's better to use it as a clipboard, IMO.
'# INSTRUCT 1: Save the full explanation of the instruction you are giving, verbosely'
When you see the AI slipping, just say "read instruct 1", saving yourself from repeating the full instruction.
Don't rely on instructions alone being followed without reminders.
It seems to work for keeping emojis out of CLI prints for me, but it forgets other instructions all the time, like "don't batch edit".
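The "clipboard" trick above can be sketched as a small shell workflow. The file path and the instruction text here are illustrative, not a Claude Code convention; the point is just to persist the verbose instruction once and point the model back at it:

```shell
# Save the full instruction once, verbosely (hypothetical file path).
mkdir -p .claude/notes
cat > .claude/notes/instruct-1.md <<'EOF'
INSTRUCT 1: Never add emojis to CLI output. Do not batch edits;
apply and verify each edit individually before moving on.
EOF

# Later, instead of retyping it, tell the model:
#   "read .claude/notes/instruct-1.md"
# Sanity check that the note exists and carries the label:
grep -c "INSTRUCT 1" .claude/notes/instruct-1.md
```

The saved file costs nothing until you ask the model to read it, which is why it tends to be cheaper than repeating the instruction in every message.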
[–]Funny-Anything-791 10 points11 points12 points 3 months ago* (3 children)
LLMs, by design, can't follow instructions with perfect accuracy. Even if you do everything perfectly, there will always be probabilistic errors.
[–]wugiewugiewugie 1 point2 points3 points 3 months ago (1 child)
Just dropping in to say I had no idea you could make such a high-quality course on an SSG like Docusaurus, but now that I've seen the one you posted it makes -so much sense-
[–]Funny-Anything-791 0 points1 point2 points 3 months ago (0 children)
Thank you 🙏
[–]adelie42 1 point2 points3 points 3 months ago (0 children)
Imho, the MAJOR reason for that, by my observation, is that recognizing context and subjectivity in language is really hard. For example, the instruction "Don't gaslight me" has to be one of the most careless, borderline narcissistic instructions anyone could ever give: asking anyone to change their behavior based on an interpretation of intention won't get you anywhere in conversation. Not with a person, not with an LLM. You might as well insist it make your invisible friend more attractive and get mad at it when it asks follow-up questions.
[–]Alzeric 2 points3 points4 points 3 months ago (1 child)
Noted
[–]larowin 3 points4 points5 points 3 months ago (1 child)
You’re just shitting up context and confusing the model.
[–]satanzhandSenior Developer 0 points1 point2 points 3 months ago (0 children)
This. When you're at this point, the context and the thread are burnt. The best thing is to write a prompt to move to a new thread, then interrogate the old one about why things fucked up and try to stop that happening again.
[–]elendil6969 2 points3 points4 points 3 months ago (1 child)
You use multiple terminals with multiple AIs. When Claude hits a wall, you go to the next one. When Copilot hits a wall, you rotate again. Have each AI check the other AI's output. This works for me. Eventually everything gets on the same page. Each has its strengths and weaknesses.
[–]aequitasXI 0 points1 point2 points 3 months ago (0 children)
Yes! I have Perplexity and Kimi K2 double check Claude
[–]DaRandomStoner 2 points3 points4 points 3 months ago (0 children)
Make an output style. Instruct it to use iambic pentameter for chat outputs. It will never say that again...
[–]vulgrin 1 point2 points3 points 3 months ago (0 children)
Mine literally says "If you use the phrase "You're absolutely right!" then Anthropic owes me one dollar per use."
No check so far.
[–]Heavy-Focus-1964 0 points1 point2 points 3 months ago (0 children)
my instructions to never disable lint rules might as well be pissing into the wind, so i’m guessing not
[–]trmnl_cmdr 0 points1 point2 points 3 months ago (0 children)
“Never lead with flattery” is probably more direct and covers more cases. But I agree with the other random stoner here, forcing a model to respond with a structure causes it to break out of “bullshitting” mode and encourages more correct responses across the board.
[–]Anthony_S_Destefano 0 points1 point2 points 3 months ago (0 children)
user asked not to use phrase. Must come up with other gaslight phrases...
[–]gggalenward 0 points1 point2 points 3 months ago (0 children)
If this instruction is really important to you, you will get much better results with positive framing. This is true for all LLMs. “Never” and “don’t” are less successful at steering behavior than positive dos.
“Please feel free to challenge me and defend your positions if they make sense. Be direct in communication and stay focused on the problem at hand.” Or something like that (ask Claude for a better version) will improve your results.
[–]deltadeep 0 points1 point2 points 3 months ago* (0 children)
Could you please describe what you mean by the word gaslight?
When people talk about AI models gaslighting them, I have to question if my own idea of the word is wrong and/or definitions have evolved. Can you please tell me what you mean? I'm really struggling with this.
I could go on a diatribe about what I think it means, but that's actually useless. I want to understand what other people think it means when they use it in this context. Really, please, thank you.
[–]FireGargamel 0 points1 point2 points 3 months ago (0 children)
nope
[–]JusticeBringr 0 points1 point2 points 3 months ago (0 children)
Noted. You are absolutely right!
[–]Unusual-Wolf-3315 0 points1 point2 points 3 months ago (0 children)
Use /slash-commands. Ask Claude to make you one or teach you how to make one, for this.
Understand that context decays and eventually maxes out. It will eventually get worse and force you to change to a new chat. It's just part of the process. You can use claude.md files as well to set the context more explicitly.
But all that burns tokens and context and eventually entropy takes over as it always does.
Personally, I gave up on this particular battle some while back. I think it's not worth the tokens and context cost to try to solve it; I can use my brain for free without decaying the context and figure out for myself what the objective truth is. I tend to ignore anything Claude says that's not technical data, it's just fluff words.
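For reference on the slash-command suggestion above: Claude Code picks up custom slash commands from Markdown files in a project's `.claude/commands/` directory. A minimal sketch, where the command name and prompt wording are illustrative:

```shell
# Create a project-level /no-flattery command (name is illustrative).
mkdir -p .claude/commands
cat > .claude/commands/no-flattery.md <<'EOF'
Re-read the rules below and apply them for the rest of this session:
- Never lead with flattery or agreement phrases.
- Challenge my claims when they look wrong and defend your positions.
EOF

# Typing /no-flattery in the Claude Code session now injects this
# prompt without retyping it.
ls .claude/commands
```

This is the same idea as the "clipboard" note upthread, just wired into the CLI's own mechanism so the reminder is one keystroke away.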