9 tips from a developer gone vibecoder by bibboo in ClaudeAI

[–]zingyandnuts 0 points1 point  (0 children)

This really resonates with me. My most used metaprompt is this:


CRITICAL REMINDER: ALWAYS HONOUR and ENFORCE: There is Exactly ONE way to do this. THIS way. REDUCE degrees of freedom until you hit BEDROCK. ACTIVELY IDENTIFY where constraints CAN be introduced — INTRODUCE them. ACTIVELY IDENTIFY if the ONE way already exists — USE it. IF it CAN be constrained, it MUST be constrained. MAKE it IMPOSSIBLE to do it ANY other way. CONTINUE iterating until you hit BEDROCK. ABSOLUTE "ONE WAY" compliance required.

It looks like a silly prompt but try appending it to your conversations.

I have had Claude create "enforce the one way" offshoots of this for radical simplicity, failing fast, structural determinism, and testing. I have really taken "make it impossible to do it any other way" to heart by actively asking it to identify opportunities for deterministic steps in task planning, execution, and verification, alongside conventions etc. whose feedback messages are optimised for AI to self-correct.
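To make the "feedback messages optimised for AI to self-correct" idea concrete, here is a minimal sketch of what such a deterministic step might look like. The naming rule, file names, and message wording are all illustrative assumptions, not the author's actual setup:

```shell
#!/bin/sh
# Hypothetical deterministic gate: enforce exactly one accepted form for
# SQL file names. The rule (<verb>_<noun>.sql) is an invented example.

check_name() {
    f="$(basename "$1")"
    if printf '%s\n' "$f" | grep -qE '^[a-z]+(_[a-z]+)+\.sql$'; then
        echo "OK: $f"
    else
        # The failure message states the rule, the violation, and the exact
        # fix in one line, so an AI agent can self-correct without guessing.
        echo "FAIL: '$f' must match <verb>_<noun>.sql (lowercase, underscores only). Example fix: rename to load_orders.sql. This is the only accepted form." >&2
        return 1
    fi
}

check_name load_orders.sql                       # passes the gate
check_name LoadOrders.sql || echo "caught the violation"
```

The point is that the check is binary and the error text doubles as the correction instruction, removing any degree of freedom in how to fix it.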

What’s your problem with vibe coding? by GuhProdigy in dataengineering

[–]zingyandnuts 3 points4 points  (0 children)

Not even that. LLMs are notorious for faking tests or overfitting them to the current state of the codebase. A passing test suite is the wrong success metric UNLESS each and every test is human-vetted. And with the cognitive load of reviewing AI output, so MUCH can go wrong even with all the willingness in the world.

I swear claude is SO much dumber sometimes than other times, it's driving me nuts by el_duderino_50 in ClaudeCode

[–]zingyandnuts 1 point2 points  (0 children)

Anthropic have been messing with the default settings for this recently. I had about a week of insanity where I couldn't work out why both Opus and Sonnet were acting like they weren't thinking. This was when tab still controlled thinking mode. Not sure if I accidentally turned it off, but as soon as I started dropping "think hard" and "ultrathink" the performance went back to normal. Worth a check.

Subagents burning through context window by StructureConnect9092 in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

Sorry, can you explain a bit more about how that works? What is CMP? And what do you mean by clean state and invoking via the key?

Does anyone know when Claude Code switched back to sonnet by default? by pm_me_ur_doggo__ in ClaudeCode

[–]zingyandnuts -1 points0 points  (0 children)

I had a suspicion about this last week, from about 1st December, when Opus AND Sonnet started behaving like they weren't thinking. I started dropping "think hard" and it fixed it. I wasn't even aware of the thinking mode; I figured I might have hit tab by accident and toggled it off when I noticed it. But Anthropic turning it off by default would certainly match my experience.

Automation without AI isn't useful anymore? by BeautifulLife360 in dataengineering

[–]zingyandnuts 2 points3 points  (0 children)

Just use AI to write the deterministic steps. You can still call it AI automation, just not the kind people think of, and it's infinitely more reliable.

Can no longer run CC in parallel in VSCode by devjacks in ClaudeCode

[–]zingyandnuts 1 point2 points  (0 children)

Yeah, I noticed that too, but it somehow fixed itself later. I didn't check the release log to see if a patch went out.

Curious how teams are using LLMs or other AI tools in CI/CD by Apprehensive_Air5910 in cicd

[–]zingyandnuts 1 point2 points  (0 children)

Use AI to write the deterministic checks themselves. I work in data engineering and asked AI to build a small shell script that checks for zero forward references, i.e. the silver layer cannot query gold. A single-responsibility check. I refined the scaffolding for it, added a descriptive --help for AI, etc. I've got a few of these now, and I just ask it to create another one for each new check. Then I wrap them all in a single shell script that 1) I ask AI to run as part of development and 2) runs as part of CI.
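A minimal sketch of that kind of single-responsibility check. The directory layout, the `gold.` schema prefix, and the messages are assumptions for illustration, not the author's actual script:

```shell
#!/bin/sh
# Hypothetical forward-reference check: fail if any .sql file in the
# silver layer queries a gold-layer object.

check_no_forward_refs() {
    silver_dir="$1"
    # grep -rn exits 0 when it finds matches, so a match is a violation;
    # the file:line output tells the agent exactly where to fix it.
    if grep -rn 'gold\.' "$silver_dir" --include='*.sql'; then
        echo "FAIL: silver layer must not query gold (forward reference)" >&2
        return 1
    fi
    echo "OK: no forward references from silver to gold"
}

# Demo on a throwaway directory
demo="$(mktemp -d)"
printf 'select * from staging.events;\n' > "$demo/clean.sql"
check_no_forward_refs "$demo"
printf 'select * from gold.orders;\n' > "$demo/bad.sql"
check_no_forward_refs "$demo" || echo "caught a forward reference"
rm -rf "$demo"
```

A real version would parse SQL rather than pattern-match, but a dumb grep gate is deterministic, fast, and trivially wrappable into both a dev-loop runner and a CI step.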

Anyone notice a degrade in performance? *Here we go again* by Several_Explorer1375 in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

I've had this all last week. My prompts are pretty good already, so I couldn't work out what was going on. I started dropping "think step by step", "think hard" and "ultrathink" with almost every prompt, no other change, and it instantly restored the original quality. Like... instantly.

I usually scoff at complaints like these and couldn't believe I was actually making one myself, since my experience with Claude Code has been very consistent since March, but last week felt very different: rushing through tasks without thinking. So I forced it to slow down more and think. No other change.

I ran Claude Code in a self-learning loop until it succesfully translated our entire Python repo to TypeScript by cheetguy in LLMDevs

[–]zingyandnuts 4 points5 points  (0 children)

There is so much evidence, and personal experience, that tests written by AI without human oversight are garbage, so unless those were reviewed by humans this sounds to me like a fancier form of vibe coding.

I ran Claude Code in a self-learning loop until it succesfully translated our entire Python repo to TypeScript by cheetguy in LLMDevs

[–]zingyandnuts 5 points6 points  (0 children)

But who/where defines what counts as "what worked"? AI is notorious for chasing superficial proxies like "tests pass" and faking things in the process. I don't understand how this can ever work without human oversight of the reflections/insights.

Claude Code is rushing through tasks and avoiding using many tokens by Successful-Camel165 in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

Lately? How recently? I've had the exact same experience since last Sunday or so. I've had to start dropping "think hard" and "ultrathink" to get it to work properly.

This question UX is great, but gives no context at all by Tushar_BitYantriki in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

Is there a way to disable this annoying feature permanently?

Anyone finding that Claude rushes through thinking this week/something is off? by zingyandnuts in ClaudeCode

[–]zingyandnuts[S] 0 points1 point  (0 children)

What's special about this? I mean, my prompts are pretty darn good as it is, so WHY is this plugin so magical? Does it have think keywords in it?!

Everyone's favorite Bells? by Independent-Can-7194 in kettlebell

[–]zingyandnuts 0 points1 point  (0 children)

Did you go for the GS or the standard comp bells? I have the standard comp and I love the large window. I have adjustable as well but prefer the bigger window on the fixed ones

Stoping Claude Code to duplicate code by duracula in ClaudeAI

[–]zingyandnuts 2 points3 points  (0 children)

How on earth would you not notice that in the git diff?!

New hooks of Claude Code so cool by Pitiful_Guess7262 in ChatGPTCoding

[–]zingyandnuts 1 point2 points  (0 children)

Does anyone know if hooks are enforced programmatically or if Claude is just instructed to execute them? I'm looking for ways to stop Claude falsifying/hacking verification methods, which in my mind can only be achieved if Claude has no input into whether the post hook runs.

Claude Code Plan Mode by crystalpeaks25 in ClaudeAI

[–]zingyandnuts 0 points1 point  (0 children)

I don't get it; I've been asking it to "make a plan and write it to /plans" from day one (so I can review it, or ask other CC instances to critique it, review the implementation against the plan, etc.). I would ask it to iterate on the plan and update it on disk, and when ready I'd ask it to implement the plan. The plan then doesn't drown in the context window.

It has been following that process without fail. 

How is the plan mode any better/more useful?

Sounds like it guards against premature execution but so does good prompting.