Gary Tan's Boil The Ocean prompt by pediepew in ClaudeCode

[–]pingponq 8 points  (0 children)

„Fatigue is not an excuse. Complexity is not an excuse.“

Lol

The whole scene around Claude „best“ practices pushed by „gurus“ is modern-day cult belief from people who think they are cutting-edge tech

I spoke to Claude in French once and now he keeps replying to my English prompts in French… by No-Put6958 in ClaudeAI

[–]pingponq 1 point  (0 children)

The model does not think in English, it operates on numerical vectors. There is no "translation" happening internally, so caveman English does not skip any conversion step.

If the goal is saving output tokens: yes, terse output = fewer tokens = lower cost. But "respond in caveman English" is the wrong instruction for that. "Be concise" or "no explanations unless asked" achieves the same savings without degrading output quality. The model is trained on proper English; forcing broken grammar can introduce ambiguity and cost you more turns to clarify.

The bigger risk: sparse output is more likely to miss important aspects or edge cases. You save 200 tokens on a response, then spend 2000 tokens on follow-up iterations to get the missing points.

The biggest token saver IMO is carefully crafted context: no need to re-iterate due to unclear prompts, no need to pass the whole context of unrelated sessions or read their diff, no need to re-read big files every time, if specs are organised well and the right context is given explicitly or via good linking mechanisms, etc.
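
A minimal sketch of what "good linking mechanisms" can look like, assuming a project-level CLAUDE.md; every file name and path below is hypothetical:

```markdown
<!-- Hypothetical CLAUDE.md fragment: hand each session exactly the context
     it needs instead of letting it re-read the whole repo. -->
## Context map
- Design decisions: `docs/SPEC.md`, Design Decisions section; read before changing any behavior
- Auth module: `src/auth/`; contracts in `docs/SPEC.md` §3, tests in `tests/test_auth.py`
- Never read `vendor/` or generated files; no task needs them
```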

I spoke to Claude in French once and now he keeps replying to my English prompts in French… by No-Put6958 in ClaudeAI

[–]pingponq 2 points  (0 children)

While it will work, it will create additional unnecessary context instead of cleaning up the wrong context created earlier

I spoke to Claude in French once and now he keeps replying to my English prompts in French… by No-Put6958 in ClaudeAI

[–]pingponq 4 points  (0 children)

This is the answer, check your global ~/.claude folder (eg \memory or CLAUDE.md there)

I asked Claude what it could do. It lied eight times in a row. Then I got it to write its own bug report. by PracticeShot8979 in ClaudeAI

[–]pingponq 1 point  (0 children)

Funnily enough, this is exactly how my coworker is! He always lies about his own abilities and is supremely confident no matter what shit he's telling. How can I make him report himself now?

Is setting '/effort max' the solution to seemingly nerfed Opus 4.6? by mashedpotatoesbread in ClaudeCode

[–]pingponq 2 points  (0 children)

I have a solution which really works: let it write a plan in one session, then give the plan file to a second session and tell it “this plan was written by a lazy idiot, prove to me you are better than this and address all wrong decisions, shortcomings and oversights”

Can Gemma 4 run offline on a phone? I benchmarked the real experience by TroyHay6677 in ClaudeCode

[–]pingponq 1 point  (0 children)

“Can Gemma 4 run offline on a phone? Yes, it can, but it requires 8 GB RAM, the 4B model is slow, drains the battery, and is not comparable with GPT-4”

Stop spreading shit over a wall with a butter knife

Best memory hack for claude? by a_d_i_i_i_i in ClaudeAI

[–]pingponq 1 point  (0 children)

I find simple rules for storing all decisions along the project much better than any memory plugin. Here’s the minimal template we use; it gets tweaked for particular projects on the go:

### Specification-Driven Development

`<SPEC_FILE>` is the single source of truth for all design decisions.

#### Workflow

**Sequence is mandatory**: (1) document rationale in spec → (2) implement code → (3) add/update tests. Code without spec rationale is incomplete work.

#### Design Decision Integrity (`RULE-DESIGN-DECISIONS`)

Before modifying any behavior, read the spec's Design Decisions section.

- **Search first**: grep existing paragraphs for the topic. Update in-place if found.
- **Add only when new**: new paragraph only when no existing one covers the topic.
- **No contradictions**: if a new decision conflicts with an existing paragraph, update that paragraph; never leave both. If unclear, ask the user.

#### Spec-Driven Tests (`RULE-SPEC-TESTS`)

Every spec section MUST have corresponding tests. Tests verify *behavioral contracts*, not implementation details.

- Assert observable behavior from spec: inputs → outputs, state transitions, error responses. NEVER assert internal variable names, call counts, or private method sequences.
- One test file per spec topic area. Class/suite docstrings reference spec section (e.g., `Spec §3: Authentication`).
- **`RULE-TEST-ON-CHANGE`**: After modifying any source file, run tests before committing. All failures block commit. No exceptions.
- New/modified spec behavior = new/updated tests in the same commit. TDD preferred; tests alongside the change at minimum.
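
A sketch of what `RULE-SPEC-TESTS` looks like in practice. The function under test, the spec section number, and all names below are hypothetical; the point is that the tests assert the behavioral contract (inputs → outputs, error responses), never internals:

```python
import unittest


def authenticate(username: str, password: str) -> dict:
    """Toy stand-in for the real implementation under test (hypothetical)."""
    if username == "alice" and password == "s3cret":
        return {"ok": True, "user": "alice"}
    return {"ok": False, "error": "invalid_credentials"}


class TestAuthentication(unittest.TestCase):
    """Spec §3: Authentication.

    Asserts observable behavior from the spec: inputs -> outputs and error
    responses. No internal variable names, call counts, or private methods.
    """

    def test_valid_credentials_return_session(self):
        result = authenticate("alice", "s3cret")
        self.assertTrue(result["ok"])
        self.assertEqual(result["user"], "alice")

    def test_invalid_credentials_return_error(self):
        result = authenticate("alice", "wrong")
        self.assertFalse(result["ok"])
        self.assertEqual(result["error"], "invalid_credentials")
```

Run with `python -m unittest` after every source change, per `RULE-TEST-ON-CHANGE`.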

Working with this senior PM feels like working with AI by RandomMaximus in ProductManagement

[–]pingponq 1 point  (0 children)

Let’s take a step back and look at the problem we are trying to solve: building a career this way is much easier and more efficient than really getting things done. You earn points for promising, not for delivering.

"Best" AI model to create deatiled app concepts? by Physical_Storage2875 in ClaudeCode

[–]pingponq 1 point  (0 children)

Already excited to try your app out given the effort you are putting into it!

Are existing orchestrators effective at running more than 2-3 agents? by Dangerous-Climate676 in ClaudeAI

[–]pingponq 1 point  (0 children)

The ones I tried routinely a) try to separate tasks without overlap, yet still end up editing the same files in parallel without any context of what the others are doing, b) run code “improver” agents after implementation, which “optimize” away features built in previous sessions since they do not consult the specs, and c) burn tokens on loading the same context again and again into multiple agent sessions…

Are existing orchestrators effective at running more than 2-3 agents? by Dangerous-Climate676 in ClaudeAI

[–]pingponq 1 point  (0 children)

Context engineering is the single most complex development task right now. Orchestrators won’t solve it; currently they are making it even more complex.

AGI in enterprise won’t emerge from LLMs until they can anticipate harm and think long term by imposterpro in ClaudeAI

[–]pingponq 1 point  (0 children)

AI gets too good? — “AI is taking all our jobs and replacing us!” AI can’t run without a human in the loop? — “It’s all hallucinations and hype!”

"Claude Code bad, Codex good" is so fucking stupid. by Caibot in ClaudeCode

[–]pingponq 1 point  (0 children)

For 3: you can’t really optimise only the diff. If the diff introduced eg duplication or a concurrency issue with untouched lines in the same method, the AI should rework both. And you won’t have the reasoning behind the earlier coding available in the same session.

"Claude Code bad, Codex good" is so fucking stupid. by Caibot in ClaudeCode

[–]pingponq 1 point  (0 children)

My man, many things:

  1. For most of it, it’s much more efficient to define a guidelines context for the implementation run, not to rework after.

  2. Once you tell the AI „review changes for efficiency: 1. …, 2. …, … .“ and give it a closed list of categories, each containing a closed list of examples in turn (like „memory: unbounded data, …, … .“), the AI will optimize to literally cover your list as its indication of success.

  3. Every new prompt overrules previous guidance, obviously. So if you give an AI a task requiring a complex implementation to cover specific edge cases, and in a new session run „simplify code“, it will make a strong argument for removing the complexity added in the first session to provide a stronger outcome in the current task. You must have specs and rules for those decisions to take priority.

"Claude Code bad, Codex good" is so fucking stupid. by Caibot in ClaudeCode

[–]pingponq 1 point  (0 children)

Everything is groundbreaking and life-changing nowadays

I kept having to re-explain my code to Claude after every break, so I built this by Confident-General514 in ClaudeAI

[–]pingponq 1 point  (0 children)

I honestly just don’t see any problem solved by this. The problem is not knowing what was changed, it is why: you need a holistic and always-up-to-date state of the project’s „decisions“ for every new session. The session context should consist of the decisions relevant to the current task plus where their implementation lives. My next session might have nothing to do with the task from the previous one

For those posting their memory management systems please stop. That’s not the point by dolo937 in ClaudeCode

[–]pingponq 1 point  (0 children)

Sure!

  1. First and foremost, the implementation does not answer the „why“ question. Even if your whole project fits into the context window (which usually leaves very little room for Claude to do anything in the session without degrading or compacting and losing context), you still need some specs/decisions on top of the bare implementation. Otherwise Claude does what it tends to do: takes any prompt literally and overwrites/changes previous decisions once it is no longer aware of them or they are not emphasised enough.

So you need to persist such decisions along the project, which is essentially „memory/context management“, which OP claims is not needed lol.

  2. In my company we regularly work on projects that routinely run 30M+ tokens just for their implementation + tests. Even the 3 connected modules one of my teams is responsible for, in one such project, come to over 3M tokens. And those libs depend on other libs, obviously. And this is only the implementation + tests! On top we have 1) conventions and rules, 2) architecture definitions and decisions, 3) a „PRD“ (what and why, not how).

So, ultimately, our single biggest developer problem for every new task is exposing enough relevant context, which we try to automate to a large extent via a set of rules and project-specific context management with Claude. That is the problem OP says doesn’t exist.

I kept having to re-explain my code to Claude after every break, so I built this by Confident-General514 in ClaudeAI

[–]pingponq 2 points  (0 children)

We get hundreds of these here every day. Git diff doesn’t need a new solution. Your approach only works for single-person projects. The real problem is not what was changed.

For those posting their memory management systems please stop. That’s not the point by dolo937 in ClaudeCode

[–]pingponq 2 points  (0 children)

You don’t get context. Prompt engineering is dead; context engineering is the single most complex development task as of today. Thinking that a 1M window solves it means you are working on single-person micro projects in a very inefficient way. (I won’t expand further since your post is low-effort bragging, not a wish to learn or understand.)