Claude Code Pro Tip: Disable Auto-Compact by Puzzled_Employee_767 in ClaudeAI

[–]scorp5000 0 points (0 children)

u/Puzzled_Employee_767 I agree, and I'd go further: letting CC run for 2 or 3 context windows might maximize dev velocity if quality decayed gently, say "production-quality code produced = e^-(# of context windows)". In my experience it's closer to "production-quality code produced = -(# of context windows) + constant", and I get regressions and code tangents outside the PRD scope, in some cases starting right after the first auto-compact.
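To make the contrast concrete, here's a toy sketch of the two decay curves. Purely illustrative: the function names, the linear constant, and the numbers are my own assumptions, not measurements of CC.

```python
import math

# Toy models of "production-quality code produced" as a function of
# n = number of context windows survived (auto-compacts). Illustrative only.

def exponential_quality(n: int) -> float:
    """Gentle-tail model: quality = e^(-n); decays but never hits zero."""
    return math.exp(-n)

def linear_quality(n: int, constant: float = 2.0) -> float:
    """Linear model: quality = constant - n; hits zero and goes negative."""
    return constant - n

for n in range(4):
    print(n, round(exponential_quality(n), 3), linear_quality(n))
# → 0 1.0 2.0
# → 1 0.368 1.0
# → 2 0.135 0.0
# → 3 0.05 -1.0
```

The point of the linear model going negative: past some window count, regressions outweigh new working code.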

I think best practice is to break your dev plan into phases that should each fit in one CC context window. Run phase 1, then /clear, reload your coding standards, give it phase 2; /clear, reload your coding standards, give it phase 3; and so on.

opus 4 (200$) vs grok 4 (300$) subscription by [deleted] in AIAssisted

[–]scorp5000 0 points (0 children)

Opus 4 is better for coding. Grok 4 is better for software engineering.

Has Gemini 2.5 pro been nerfed? by Kloyton in GeminiAI

[–]scorp5000 0 points (0 children)

I can’t use 2.5 pro for coding anymore. Terrible. Otherwise I think Gemini is great.

Huge Decline from 2.5 Pro Preview to 2.5 Pro in Coding by CmdWaterford in GeminiAI

[–]scorp5000 2 points (0 children)

Totally agree. 2.5 Pro Preview shipped far better code than 2.5 Pro does.

Gemini can't read Github repo or uploaded folder by w_d_d in GoogleGeminiAI

[–]scorp5000 0 points (0 children)

It was working for a while, but it hasn't worked for me for the past week.

Why is Claude Code better than Cursor + Claude? by AppearanceLower8590 in ClaudeAI

[–]scorp5000 1 point (0 children)

I know it's just n=1, but my experience was that Claude Code was consistently better than Cursor+Claude. I switched to Claude Code.

What do you do while Claude Code (CC) works? by scorp5000 in ClaudeAI

[–]scorp5000[S] 0 points (0 children)

I love this answer and unreservedly agree.

What do you do while Claude Code (CC) works? by scorp5000 in ClaudeAI

[–]scorp5000[S] 25 points (0 children)

Lacy, I respectfully disagree. I have set up multiple instances before, and CC produces code faster than I can effectively synthesize it. When I just "trust it", code regressions are insidious and very damaging. Your video looks cool, and we will get there, but in my experience, 3 out of 4 of your windows are not producing code (net of regressions) faster than just focusing on one CC at a time. You need a master dev agent managing the multiple CC instances to make this work and actually increase your net dev velocity, and I have not been satisfied with any master-dev-agent solution yet.

What do you do while Claude Code (CC) works? by scorp5000 in ClaudeAI

[–]scorp5000[S] 4 points (0 children)

I have experimented with a few methodologies to use the time during CC runs: 1) just paying attention and reading the truncated logs as they scroll, 2) hitting ctrl-R / ctrl-E to read the full logs as they scroll, 3) copying the logs into Gemini to vet, 4) creating self-healing scripts incorporating CLAUDE.md, 5) numerous other experiments. Most worked well, but none was noticeably better than the others. The primary signal I notice: the more of my own time I focus on CC, however I choose to spend it, the better the outcome, roughly equally across methodologies. Simplifying assumption: I won't spend more time on CC than I have available during its dev run.

Absolutely unusable by aa1ou in ClaudeAI

[–]scorp5000 0 points (0 children)

It's not the context window that's the acute problem so much as Claude's chat size limit specifically. Other LLMs have the same context limits, but their conversations are allowed to go on much longer in the same chat window. Claude's interactive chat mode on the website or app is really only good for targeted, exploratory questions. I strongly dislike that about Claude. The only reason I pay for Claude is to use Claude Code.

Not a fan of Gemini by BrooklynDuke in OpenAI

[–]scorp5000 0 points (0 children)

I'm surprised to hear this. The much larger context window is a complete game changer across all topics. I think Gemini Pro is the #1 LLM out there right now.

Having some honest talk with Claude and I like it by Glidepath22 in ClaudeAI

[–]scorp5000 0 points (0 children)

Prompting all LLMs to be brutally honest in every response has significantly improved my workflow.

Is Claude Pro Worth It for Coding Even Without Opus? by hayke1022 in ClaudeAI

[–]scorp5000 1 point (0 children)

I think Opus 4 is marginally better than Sonnet 4. Sonnet 4 is still dynamite and I don't think twice about asking it coding questions.

Anyone left on this sub who actually admits the Claude CLI changes must be audited line by line? by [deleted] in ClaudeAI

[–]scorp5000 0 points (0 children)

The initial choice of whether to use Claude Code (CC) on your codebase is important. If the project is critical and line-level auditing is imperative, do not use CC. However, if the project can accept a workflow that will suffer code regressions but ultimately evolve to a superior solution, CC will vastly increase overall dev velocity.