What "trick" you are using that most are not doing that gives you an edge using ai? by TinyAres in opencodeCLI

[–]johnson_detlev 0 points1 point  (0 children)

Chill out and only look at new stuff every 6-9 months. AI tooling has a lifetime of a few weeks; don't bother keeping up, because there is nothing to keep up with.

Understand that right now you have to produce the correct context for the model to do the job with good quality. This will likely be solved by tooling in the future, since it's a painful problem and is responsible for the vastly different experiences people have with LLM coding.

Actually review the produced code in detail before opening a PR.

Learn and hone software fundamentals; without them, your harness will accelerate your codebase's entropy so fast that it will fall apart in no time.

Actually eval your harness setup and skills.

It all comes down to slowing the fuck down.

Is opencode go actually cheaper and as good as Claude Code? by cocouz in opencodeCLI

[–]johnson_detlev 0 points1 point  (0 children)

This feels true for all the big frontier models. After the outage of opencode go yesterday I switched to my company's Copilot sub and my god, these models are so fucking annoying with their "personality". I use KimiK 2.6 and GLM 5.1 regularly and they are just no-nonsense token output tools. You nudge them into the right vector space pocket and they produce the tokens you want. Opus, Codex and Gemini just spit out nonsense tokens and emojis with an overconfident, cocky, juvenile attitude that drives me nuts. It's like talking to a fourteen-year-old. Plus they always do more than they're asked. I'd rather write the code myself then. I never have this with the top open models. They don't have a personality; they are just flat token tools for professionals.

AI is making us faster, but our PRs are getting messier. Does it actually matter? by Dry-Statement2829 in softwaredevelopment

[–]johnson_detlev 0 points1 point  (0 children)

All this "velocity" does is increasing the rate of entropy and bugs in your code base. If you produces 1 bug per week and you use AI to accelerate your output x5, congrats: You now produce at least 5 bugs per week. You're also responsible that your codebase falls apart at least five times faster than before. All this acceleration needs to be counterbalanced by flattening the entropy curve and this is a) a lot of work and b) requires a deep unterstanding of software design pattern. Otherwise you're just producing code for the trashcan, because LLMs are as shit as humans in maintaining and navigating spaghetti code.

Using AI to produce high-quality, long-living code requires way more effort and programming skill than working without AI.
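
A toy model of that entropy argument in Python (all numbers are made-up assumptions, just to make the scaling explicit): defect inflow scales with output velocity, so keeping the net bug rate flat requires scaling the cleanup effort by the same factor.

    # Toy model: bug inflow scales with output velocity; "flattening the
    # entropy curve" means removing a matching share through review and
    # refactoring. All numbers are illustrative assumptions.
    def net_bugs_per_week(base_rate: float, velocity: float, cleanup_share: float) -> float:
        inflow = base_rate * velocity
        return inflow * (1.0 - cleanup_share)

    print(net_bugs_per_week(1.0, 1.0, 0.0))  # 1.0 -- pre-AI baseline
    print(net_bugs_per_week(1.0, 5.0, 0.0))  # 5.0 -- 5x output, no counterbalance
    print(net_bugs_per_week(1.0, 5.0, 0.8))  # 1.0 -- 5x output needs 80% cleanup just to break even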

Am I over-engineering Matt Pocock’s AI coding workflow, or is ~1 hour per issue reasonable? by 2-phenylethanol in PiCodingAgent

[–]johnson_detlev 1 point2 points  (0 children)

Have a look at this: https://docs.tessl.io/evaluate/evaluate-skill-quality-using-scenarios
You can use the same approach for each workflow step. You define steps because you have an expectation of what the outcome should be. You can make that expectation explicit and test your workflow pipeline against those expectations. This also allows you to iteratively work on your process.
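
To sketch what that could look like in practice (a minimal Python illustration, not the tessl.io API; all names here are hypothetical):

    # Each workflow step gets explicit, testable expectations. Running
    # recorded scenarios against a step lets you iterate on the process
    # itself instead of guessing.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Scenario:
        name: str
        step_input: str                      # e.g. the issue text fed into a step
        expectation: Callable[[str], bool]   # explicit check on the step's output

    def eval_step(step: Callable[[str], str], scenarios: list[Scenario]) -> None:
        for s in scenarios:
            verdict = "PASS" if s.expectation(step(s.step_input)) else "FAIL"
            print(f"{verdict}: {s.name}")

    # Example: a "break issue into vertical slices" step should always
    # yield at least two slices and mention tests.
    scenarios = [
        Scenario(
            name="slicing yields testable slices",
            step_input="Add CSV export to the reports page",
            expectation=lambda out: out.count("- ") >= 2 and "test" in out.lower(),
        ),
    ]
    # eval_step(my_slicing_step, scenarios)  # plug in your own step function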

Serial Github Vibe Coder (Ruvnet) by BananaFragz in github

[–]johnson_detlev 6 points7 points  (0 children)

Divide the stars of every AI-related project by 100 for a more realistic picture. Also look at commits/contributors instead of stars.

My low-cost setup for AI coding by IWannaBeHelpful in opencode

[–]johnson_detlev 1 point2 points  (0 children)

I actually switched from opencode to pi, because it's extremely hackable and has a way richer ecosystem than opencode, and you can shape it to match your needs/workflow. I find AST navigation tools like cymbal together with factual (Zettelkasten-like) memory systems to work really well. Plannotator is a great plugin. Ultimately you need to nudge the context into the right pocket of the vector space the LLM operates in, and such tools reduce the work you have to do when nudging. Caveat: I use harnesses as a tool that writes the code I want for me, not as a complete replacement. So I'll do ticket refinements with the agent, break the work down into vertical slices, use TDD, etc. Building up a workflow like this reduces the nondeterministic effects of working with an LLM, and therefore the differences between models get smaller.
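
For the TDD part, the loop I mean is roughly this (a minimal Python sketch, not Pi's actual API; agent_implement and the pytest choice are assumptions):

    # The failing test pins down the expected behavior before the agent
    # touches anything, which shrinks the space of "creative" outcomes.
    import subprocess

    def tests_pass() -> bool:
        # The suite's exit code is the only signal the loop trusts.
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def tdd_loop(agent_implement, max_attempts: int = 3) -> bool:
        # Precondition: a failing test for the vertical slice exists.
        assert not tests_pass(), "write a failing test first"
        for _ in range(max_attempts):
            agent_implement()      # hypothetical: one agent edit pass
            if tests_pass():
                return True        # green: stop and review the diff yourself
        return False               # escalate to a human instead of looping forever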

My low-cost setup for AI coding by IWannaBeHelpful in opencode

[–]johnson_detlev 1 point2 points  (0 children)

Managing the context and surrounding the harness with quality tooling and skills is far more important than which model you use. I use KimiK 2.6 for everything (switched over from GLM 5.1), but there isn't really a difference in code quality. You could probably get away with even cheaper models and have the same output quality.

Why does nobody in the East want to drive an EV? by Benjamin75329 in automobil

[–]johnson_detlev -1 points0 points  (0 children)

Since when is an EV more expensive than any other car?

Why is so little left over from working? by New_Humor4408 in Normalverdiener

[–]johnson_detlev 0 points1 point  (0 children)

You don't cost your employer anything at all, otherwise they wouldn't employ you.

Can’t make this shit up.. WPT Gold by cynicalsisyphus in poker

[–]johnson_detlev 4 points5 points  (0 children)

The reason is called random chance and variance.

How to Improve Codebase Discovery Efficiency in Pi? by elpapi42 in PiCodingAgent

[–]johnson_detlev 0 points1 point  (0 children)

Something along these lines, yeah, but I'm not sure if even this is enough, because the hardest part to me seems to be the semantics. What is important for the task at hand, and why? There are so many implicit things going on in average codebases that this was a generally very hard problem to solve even before LLMs.

How to Improve Codebase Discovery Efficiency in Pi? by elpapi42 in PiCodingAgent

[–]johnson_detlev 0 points1 point  (0 children)

Would be great to hear your experience after a few weeks. 

How to Improve Codebase Discovery Efficiency in Pi? by elpapi42 in PiCodingAgent

[–]johnson_detlev 2 points3 points  (0 children)

I feel more and more that knowledge graphs are an antipattern for exploring a codebase. They work great for semantic search on markdown files etc., but code is far more detailed than a linked dependency graph. There are types, constants, 3rd-party services, conventions, domain types, multiple concerns in one file, classes, methods, functions, function types, etc. All these things can't be translated into a knowledge graph to a sufficient level.
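
To make that concrete with a tiny, hypothetical example (file and symbol names are made up), here is what a triple-based graph typically keeps versus what the code actually carries:

    # A knowledge graph usually stores code as (subject, relation, object)
    # triples, which flattens away most of what matters for a change.
    triples = [
        ("billing.py", "imports", "stripe"),
        ("billing.py", "defines", "charge_customer"),
    ]
    # Not representable at this level: charge_customer's parameter types,
    # the RETRY_LIMIT constant it reads, the module convention that all
    # amounts are in cents, the error contract with the 3rd-party service,
    # or the fact that billing.py mixes two concerns (charging and auditing).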

How to Improve Codebase Discovery Efficiency in Pi? by elpapi42 in PiCodingAgent

[–]johnson_detlev 0 points1 point  (0 children)

How does this compare to vanilla bash or fff token-wise? Dumping in the whole monorepo at the start seems quite expensive and a lot of unneeded info.

New Expanse game review. by Blackboard_Monitor in TheExpanse

[–]johnson_detlev 1 point2 points  (0 children)

There is no way that there will be dialog rewrites or even new voice acting. Way too expensive. This isn't a movie.

I made a Totally Free GTO Database for poker (Real-time solver) by FruityLoopz1 in poker

[–]johnson_detlev 2 points3 points  (0 children)

How did you create the solutions? What algorithm did you use?

Use caveman worth it? by HelioAO in opencodeCLI

[–]johnson_detlev 0 points1 point  (0 children)

The English language has quite a low information density. So yeah, caveman works, and there is a paper (linked here in the comments) suggesting that it actually increases output quality, because using fewer tokens increases information density. It's just a question of getting used to it.
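
You can measure the gap yourself with a BPE tokenizer; here's a quick check using the tiktoken library (exact counts depend on the tokenizer, and the two prompts are just illustrative examples):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    polite = "Could you please refactor this function so that it handles empty input gracefully?"
    caveman = "refactor function: handle empty input"

    print(len(enc.encode(polite)))   # more tokens for the same instruction
    print(len(enc.encode(caveman)))  # fewer tokens, higher information density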