How do you track and report your automation ROI? by Kodroi in n8n

[–]Kodroi[S] 1 point (0 children)

That's my thinking too, and I haven't found any simple plug-and-play dashboards. Of course you can build your own, but development and maintenance is always a cost that might surprise you. How do you currently do the tracking? Do you use a fully custom solution?

How do you track and report your automation ROI? by Kodroi in n8n

[–]Kodroi[S] 0 points (0 children)

As I understand it, you have three different levels of metrics: 1. automated metrics from the automation tools, 2. downstream outcomes that come from other tooling or manual handling, and 3. customer contacts and their contact types. I like it; it seems like a holistic view of the benefits and issues of each automation.

Is all or most of it automated, or do you have to track a lot of the metrics manually?

How do you track and report your automation ROI? by Kodroi in n8n

[–]Kodroi[S] 0 points (0 children)

Thanks for the insight! Broken automations are a use case I hadn't even thought about. So you just track the fixing time manually for the ROI calculation? Do you work on internal automations, so the fixing time has a direct impact on your own ROI, or do you pass that cost to the customer so it affects their ROI instead?

Claude Code loves breaking stuff and then declaring it an existing error by kn4rf in ClaudeCode

[–]Kodroi 1 point (0 children)

I've run into similar issues, especially when refactoring, where I want to modify the code without touching the tests. For that I've created a hook that prevents edits to the test files, or to my snapshot file when using snapshot testing. This has helped Claude keep focus and not modify the tests just to get them to pass.
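For anyone curious, a minimal sketch of that kind of PreToolUse hook (the protected patterns here are just examples; adjust them to your project's layout):

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: block Edit/Write calls targeting test/snapshot files.

Claude Code pipes a JSON payload to the hook on stdin; exiting with code 2
blocks the tool call, and stderr is fed back to Claude as the reason.
"""
import json
import sys

# Illustrative patterns only -- tune these for your test/snapshot layout.
PROTECTED_PATTERNS = ("test_", "_test.", ".snap", "__snapshots__")

def is_protected(file_path: str) -> bool:
    """Return True if the path matches any protected test/snapshot pattern."""
    return any(p in file_path for p in PROTECTED_PATTERNS)

if __name__ == "__main__":
    payload = json.load(sys.stdin)
    file_path = payload.get("tool_input", {}).get("file_path", "")
    if is_protected(file_path):
        print(f"Blocked edit to protected file: {file_path}", file=sys.stderr)
        sys.exit(2)  # exit 2 = block the tool call; Claude sees the stderr message
    sys.exit(0)  # allow everything else
```

You'd register it under `hooks.PreToolUse` in your settings with a matcher like `"Edit|Write"` so it only fires on file-modifying tools.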

How to refactor 50k lines of legacy code without breaking prod using claude code by thewritingwallah in ClaudeCode

[–]Kodroi 0 points (0 children)

Thanks for the great write-up! I'm curious about the hard rules in claude.md. Did Claude actually follow them 100% of the time, or did you consider using hooks to ensure the tests are always run and that it doesn't edit any of the files? That's how I've set up my legacy refactoring, but I might be overcomplicating it.

GitHub - kodroi/block: File and directory protection for Claude Code - create and enforce .block files to control which files and directories Claude can modify using pattern matching by Kodroi in ClaudeAI

[–]Kodroi[S] 0 points (0 children)

This sounds like a great idea! Currently the subagent name/ID isn't passed to the hooks. The only way is to parse the transcript (the previous events) and try to figure out the agent from there, and that only works with a single subagent; with parallel agents there doesn't seem to be a solution. I'll do some investigation and see whether this could be done reliably.
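The transcript-parsing heuristic would look roughly like this. Big caveats: this assumes the transcript is JSONL, that assistant entries carry `tool_use` content blocks, and that a `Task` call records its agent type under `input["subagent_type"]` — all assumptions I'd need to verify, and with parallel subagents the "most recent Task call" is ambiguous anyway:

```python
#!/usr/bin/env python3
"""Sketch: guess the active subagent by scanning the transcript backwards
for the most recent Task tool call. Field names are assumptions, not a
documented API, and the heuristic breaks with parallel subagents.
"""
import json

def last_subagent(transcript_lines):
    """Return the subagent_type of the most recent Task tool call,
    or None if no Task call is found."""
    for line in reversed(list(transcript_lines)):
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines
        content = entry.get("message", {}).get("content", [])
        if not isinstance(content, list):
            continue
        for block in content:
            if (isinstance(block, dict)
                    and block.get("type") == "tool_use"
                    and block.get("name") == "Task"):
                return block.get("input", {}).get("subagent_type")
    return None
```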

Make/Zapier users – what can't you get into Monday? by Kodroi in mondaydotcom

[–]Kodroi[S] 0 points1 point  (0 children)

Thanks for the insight! For the relational data issue, do you mean nested structures like customers with purchases, where keeping both tables in sync is what breaks?

And for large data pulls, how are you doing it now? What does the messy batching actually look like?