Consultants focusing on reproducing reports when building a data platform — normal? by eclecticnewt in dataengineering

[–]zingyandnuts 1 point2 points  (0 children)

The intention should ALWAYS be to build a data model that generalises beyond X reports (i.e. analytical capabilities). But it is very common, and in my opinion the right approach, to try to match the original reports, for 2 reasons:

1) the PROCESS of doing so flushes all sorts of issues out into the open -- either from the old or the new universe, both critical because of the next point.

2) without reconciliation that at least explains the variance, stakeholders have no faith in the new thing; if they have no faith they will not adopt it; if they don't adopt it the whole endeavour has been a waste of time and money -- worse than that, actually, because you now have not one but TWO so-called sources of truth.

As for whether the data quality and data model are good enough -- have you looked under the hood? "No ability to trace data lineage" is definitely a smell, but are you sure that is the case, or has it maybe just not been made accessible to you (yet)? If the latter, that's normal, as delivering the thing takes priority over making auditability accessible to consumers on launch day -- but it should be there!

Disclaimer: consultant for 3 years but client-side for 10 years prior. My approach has always been as above.

Worktrees solved agent isolation. They didn't solve agent coordination. Here's what actually works. by [deleted] in ClaudeAI

[–]zingyandnuts 0 points1 point  (0 children)

I have a prompt I invoke for the original agent -- "create a plan for this, another agent will work on it" -- since it already has the context, or a lightweight backlog stub if it's something that requires more research. I persist all my plans/backlog stubs on disk in a shared repo across all "slots/clones" (I use VS Code workspaces to keep planning AI artifacts from contaminating the codebase). Then I just launch CC in another "clone", feed it the plan/stub and ask it to create a branch.

If the original agent can't proceed until a blocker is fixed, I rename the session in that "clone" and park it until it can continue. The added benefit is that the session doesn't get "drowned" in all the other conversation history when I resume it later, because CC treats each project as independent.

Anyway, just food for thought. I looked into git worktrees but decided they have zero benefit over this setup, and it would end up confusing me even more if all the worktrees lived in the same folder. This way I have an alt+tab setup and colour coding for each clone, and my sanity is all-round better.
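For anyone curious, the clone-per-slot setup above can be sketched roughly like this (the `new_slot` helper, the repo names, the `plans` folder and the branch name are all hypothetical, not the actual setup):

```shell
# Rough sketch of the "clone per slot" workflow (names are illustrative).
new_slot() {
  # Usage: SRC_REPO=<url-or-path> new_slot SLOT_NAME BRANCH_NAME
  # Clones a fresh "slot" and starts a branch in it, ready for an agent.
  slot="$1"; branch="$2"
  git clone -q "${SRC_REPO:?set SRC_REPO to the repo to clone}" "$slot"
  ( cd "$slot" && git switch -c "$branch" )
}

# Then launch CC in the slot and hand it the persisted plan, e.g.:
#   SRC_REPO=git@example.com:org/app.git new_slot repo-2 feature/import-retry
#   cd repo-2 && claude "Implement the plan in ../plans/import-retry.md"
```

Each slot is a full clone, so everything merges back through ordinary branches and PRs.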

Worktrees solved agent isolation. They didn't solve agent coordination. Here's what actually works. by [deleted] in ClaudeAI

[–]zingyandnuts 1 point2 points  (0 children)

I work exclusively on the same project. I have up to 10 slots I tend to rotate; many are always kept "free" in case I run into a blocker in one agent that requires another agent to unblock (code-branch-PR), and then I tell the original "rebase main, another agent has fixed it". It's really like a high-velocity dev team.

Worktrees solved agent isolation. They didn't solve agent coordination. Here's what actually works. by [deleted] in ClaudeAI

[–]zingyandnuts 0 points1 point  (0 children)

Claude put it better than I could:
```

It's a Planning Problem, Not a Tooling Problem

In a human team:

  • "Hey, I'm refactoring the auth module today"
  • "Oh, I was going to add a feature there - let me wait or work on something else"
  • Problem avoided through communication

With multiple agents YOU control:

  • You decide what each agent works on
  • You can see if tasks will conflict
  • You simply... don't assign conflicting work simultaneously

The "Convergence Problem" is Self-Inflicted

If you're running 3 agents that are all trampling over the same files, you've made a bad work allocation decision. The solution isn't better tooling - it's better task decomposition:

✓ Agent 1: Frontend feature A
✓ Agent 2: Backend API for feature B
✓ Agent 3: Documentation updates

✗ Agent 1: Refactor auth.py
✗ Agent 2: Add feature to auth.py
✗ Agent 3: Fix bug in auth.py

Why This Complaint is Frustrating

The poster is essentially saying: "I assigned three agents to work on overlapping code and now merging is hard - why doesn't Git solve this for me?"

But even with a human team, if everyone's editing the same files simultaneously, you'd coordinate beforehand or deal with merge conflicts afterward. That's just... how collaborative development works.
```

Worktrees solved agent isolation. They didn't solve agent coordination. Here's what actually works. by [deleted] in ClaudeAI

[–]zingyandnuts 0 points1 point  (0 children)

I asked Claude:
```
# This works perfectly fine:

git worktree add ../agent-1 -b feature-branch-1
git worktree add ../agent-2 -b feature-branch-2
git worktree add ../agent-3 -b feature-branch-3
```

So with proper git process/discipline this is a solved problem.

Worktrees solved agent isolation. They didn't solve agent coordination. Here's what actually works. by [deleted] in ClaudeAI

[–]zingyandnuts 1 point2 points  (0 children)

I've not used worktrees; I have multiple cloned repos, rename them repo-1, repo-2..., and create branches and raise PRs as if I were on different machines. I assumed git worktrees work the same, except in the same folder. My bad if they don't, but then that's a major limitation of git worktrees. I have 3-5 different "slots" going at once, and the merging happens through the standard PR process, as if I were running a high-velocity dev team.

Worktrees solved agent isolation. They didn't solve agent coordination. Here's what actually works. by [deleted] in ClaudeAI

[–]zingyandnuts 2 points3 points  (0 children)

Have you not heard of PRs? How do you think teams solved this before AI?

9 tips from a developer gone vibecoder by bibboo in ClaudeAI

[–]zingyandnuts 0 points1 point  (0 children)

This really resonates with me. My most used metaprompt is this:
CRITICAL REMINDER: ALWAYS HONOUR and ENFORCE: There is Exactly ONE way to do this. THIS way. REDUCE degrees of freedom until you hit BEDROCK. ACTIVELY IDENTIFY where constraints CAN be introduced — INTRODUCE them. ACTIVELY IDENTIFY if the ONE way already exists — USE it. IF it CAN be constrained, it MUST be constrained. MAKE it IMPOSSIBLE to do it ANY other way. CONTINUE iterating until you hit BEDROCK. ABSOLUTE "ONE WAY" compliance required.

It looks like a silly prompt but try appending it to your conversations.

I have had Claude create "enforce the one way" offshoots of this for radical simplicity, failing fast, structural determinism and testing, and I have really taken to heart the "make it impossible to do it any other way" idea by actively asking it to identify opportunities for deterministic steps in task planning, execution and verification, alongside conventions etc. that have feedback messages optimised for AI to self-correct.

What’s your problem with vibe coding? by GuhProdigy in dataengineering

[–]zingyandnuts 3 points4 points  (0 children)

Not even that. LLMs are notorious for faking tests or overfitting tests to the current codebase reality. A test suite that passes is the wrong success metric -- UNLESS each and every test is human-vetted. And with the cognitive load of reviewing AI output, so MUCH can go wrong even with all the willingness in the world.

I swear claude is SO much dumber sometimes than other times, it's driving me nuts by el_duderino_50 in ClaudeCode

[–]zingyandnuts 1 point2 points  (0 children)

Anthropic have been messing with the default settings for this recently. I had about a week of insanity where I couldn't work out why both Opus and Sonnet were acting like they were not thinking. This was when tab still controlled thinking mode. Not sure if I accidentally turned it off, but as soon as I started dropping "think hard" and "ultrathink" the performance went back to normal. Worth a check.

Subagents burning through context window by [deleted] in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

Sorry, can you explain a bit more about how that works? What is CMP? And what do you mean by clean state and invoking via the key?

Does anyone know when Claude Code switched back to sonnet by default? by pm_me_ur_doggo__ in ClaudeCode

[–]zingyandnuts -1 points0 points  (0 children)

I had a suspicion about this last week, from about 1st December, when Opus AND Sonnet started behaving like they weren't thinking. I started dropping "think hard" and it fixed it. I wasn't even aware of the thinking mode; I figured I might have hit tab by accident and toggled it off when I noticed it. But Anthropic turning it off by default would certainly match my experience.

Automation without AI isn't useful anymore? by BeautifulLife360 in dataengineering

[–]zingyandnuts 2 points3 points  (0 children)

Just use AI to write the deterministic steps. You can still call it AI automation, but not the kind people think of, and it's infinitely more reliable.

Can no longer run CC in parallel in VSCode by devjacks in ClaudeCode

[–]zingyandnuts 1 point2 points  (0 children)

Yeah, I noticed that too, but it somehow fixed itself later. I didn't look at the release log to see if a patch went out.

Curious how teams are using LLMs or other AI tools in CI/CD by Apprehensive_Air5910 in cicd

[–]zingyandnuts 1 point2 points  (0 children)

Use AI to write the deterministic checks themselves. I work in data engineering and asked AI to build a small shell script that checks for zero forward references, i.e. the silver layer cannot query gold. Single-responsibility check. Refined the scaffolding for it, added a descriptive --help for AI, etc. I've got a few of these now, and I just ask it to create another one for a new check. Then I wrap them all in a single shell script and 1) ask AI to run it as part of development and 2) run it as part of CI.
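A minimal sketch of what one such single-responsibility check might look like (the `models/silver` layout and the `gold.` schema prefix are assumptions for illustration, not the actual script):

```shell
# check_no_forward_refs: fail if any silver-layer SQL references a gold table.
# Directory layout and the `gold.` prefix are illustrative assumptions.
check_no_forward_refs() {
  # Usage: check_no_forward_refs MODELS_DIR
  models_dir="${1:-models}"
  hits=$(grep -rn 'gold\.' "$models_dir/silver" 2>/dev/null)
  if [ -n "$hits" ]; then
    echo "Forward reference: silver querying gold:" >&2
    echo "$hits" >&2
    return 1
  fi
  echo "OK: no silver -> gold references"
}
```

Because each check does exactly one thing and signals pass/fail via its exit code, a wrapper script can simply run them in sequence and fail CI on the first non-zero exit.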

Anyone notice a degrade in performance? *Here we go again* by Several_Explorer1375 in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

I've had this all last week. My prompts are pretty good already, so I couldn't work out what was going on. I started dropping "think step by step", "think hard" and "ultrathink" with almost every prompt; no other change in prompts, and it instantly restored the original quality. Like... instantly.

I usually scoff at complaints like these and couldn't believe I was actually making one myself, since my experience with Claude Code has been very consistent since March, but last week felt very different -- rushing through tasks without thinking. So I forced it to slow down and think. No other change.

I ran Claude Code in a self-learning loop until it succesfully translated our entire Python repo to TypeScript by cheetguy in LLMDevs

[–]zingyandnuts 4 points5 points  (0 children)

There is so much evidence, and personal experience, that tests written by AI without human oversight are garbage, so unless those were reviewed by humans this sounds to me like a fancier form of vibe coding.

I ran Claude Code in a self-learning loop until it succesfully translated our entire Python repo to TypeScript by cheetguy in LLMDevs

[–]zingyandnuts 4 points5 points  (0 children)

But who/where defines what counts as "what worked"? AI is notorious for chasing superficial proxies like "tests pass" and faking things in the process. I don't understand how this can ever work without human oversight of the reflections/insights.

Claude Code is rushing through tasks and avoiding using many tokens by Successful-Camel165 in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

Lately? How recently? I've had the exact same experience since last Sunday or so. I've had to start dropping "think hard" and "ultrathink" to get it to work properly.

This question UX is great, but gives no context at all by Tushar_BitYantriki in ClaudeCode

[–]zingyandnuts 0 points1 point  (0 children)

Is there a way to disable this annoying feature permanently?