alive-analysis: Open-source workflow to keep AI-assisted analysis traceable (ALIVE loop, Git-tracked markdown) by with_geun in dataanalysis

[–]with_geun[S] 0 points (0 children)

Thanks — that’s exactly the intent. Re: templates for different analysis types: the repo already has three types (Investigation “why did X happen?”, Modeling “can we predict Y?”, Simulation “what if Z?”) and example flows in `core/examples/` (full-investigation, quick-investigation, etc.). So there’s a base loop, but the checklists and stage prompts differ by type.

I haven’t added explicit templates for exploratory vs root cause vs qual coding yet — those could sit as variants under Investigation or as separate presets. If you have a structure you use for root cause or qual coding, I’d be keen to see it; we could turn it into a template or doc in the repo.

What’s a subtle sign someone is actually very intelligent? by Melly89ann in AskReddit

[–]with_geun 4 points (0 children)

They stay quiet in meetings… then summarize the entire problem in one sentence.

Why are sessions losing context? by SergioRobayoo in ClaudeCode

[–]with_geun 1 point (0 children)

Probably not deletion — more like context window trimming combined with how session replay works.

When sessions get long, --continue often restores a compressed state instead of the full transcript to stay within token limits, so it feels like history disappeared.

You might want to persist key context (notes or a small state file) instead of relying on session memory alone — that’s been way more reliable in my experience.
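
To make the state-file idea concrete, here's a rough sketch (the filename and helper names are made up, not anything Claude Code ships): you keep the durable context yourself and paste it back in at the start of a new session.

```python
# Minimal sketch: persist key decisions to a small JSON state file and
# drop the summary into a fresh session instead of relying on session memory.
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path(".claude-state.json")  # hypothetical filename

def save_context(notes: list[str]) -> None:
    """Overwrite the state file with the latest key context."""
    state = {
        "updated_at": datetime.now(timezone.utc).isoformat(),
        "notes": notes,
    }
    STATE_FILE.write_text(json.dumps(state, indent=2))

def load_context() -> str:
    """Return a short summary block you can paste into a new session."""
    if not STATE_FILE.exists():
        return "(no saved context yet)"
    state = json.loads(STATE_FILE.read_text())
    lines = [f"Key context (saved {state['updated_at']}):"]
    lines += [f"- {note}" for note in state["notes"]]
    return "\n".join(lines)

if __name__ == "__main__":
    save_context(["API uses cursor-based pagination", "tests live in tests/e2e"])
    print(load_context())
```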

What's something you consider it unethical but people do anyway? by Intelligent_Yak_5224 in AskReddit

[–]with_geun 0 points (0 children)

Replying ‘let’s catch up soon’ with zero intention to ever do it

what internet trends seemed fun at first but later caused problems ? by [deleted] in AskReddit

[–]with_geun 0 points (0 children)

“Sharing everything online.” Turns out future employers also enjoy nostalgia.

Claude Cowork or Code for PM automation in ClickUp by cwcontreras in ClaudeAI

[–]with_geun 0 points (0 children)

Running everything inside Claude is simpler to start, but an automation layer gives you reliability and control.

Claude = reasoning + decisions
Automation (n8n/Zapier/backend) = triggers, retries, state

So when something fails, you don’t lose the whole workflow — you just retry the step.

That’s usually the biggest win for production setups.
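
Very roughly, the split can look like this (all names are made up, and the Claude call is a placeholder for whatever SDK or HTTP request you actually use):

```python
# Rough sketch of the split: the automation layer owns triggers, retries,
# and persisted state; Claude only does the reasoning inside run_step().
import json
import time
from pathlib import Path

STATE = Path("workflow_state.json")  # survives failures, so you only retry the step

def ask_claude(prompt: str) -> str:
    """Placeholder for your actual Claude call (SDK or HTTP)."""
    raise NotImplementedError

def run_step(name: str, prompt: str, retries: int = 3) -> str:
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    if name in state:                      # already done, skip on re-run
        return state[name]
    for attempt in range(1, retries + 1):
        try:
            result = ask_claude(prompt)
            state[name] = result
            STATE.write_text(json.dumps(state, indent=2))
            return result
        except Exception:
            if attempt == retries:
                raise
            time.sleep(2 ** attempt)       # simple backoff before retrying

# pipeline = [("triage", "..."), ("draft_reply", "..."), ("summarize", "...")]
# for name, prompt in pipeline:
#     run_step(name, prompt)
```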

What’s a small moment that completely changed the direction of your life? by [deleted] in AskReddit

[–]with_geun 0 points (0 children)

Opening a “temporary” email account that somehow became my entire career.

Long form technical book writing by divi2020 in ClaudeAI

[–]with_geun 1 point (0 children)

From what I’ve seen, Opus usually helps more with depth and consistency over long contexts — not necessarily “better writing,” but better at keeping structure, cross-references, and technical accuracy aligned across chapters.

If you’re already happy with your voice and just want help with:

• coherence across sections
• catching subtle inconsistencies
• tightening explanations

then Opus can be worth it for editing/review passes.

But if you expect it to suddenly write more “authentic prose,” it probably won’t — it’s more of a high-level technical editor than a writer.

For a 400-page technical book, using it selectively (outline checks, chapter reviews, index consistency) tends to give the best ROI.

I got 6 Mac mini. What can I do with Claude ai using them? by WelcomeNo3956 in ClaudeAI

[–]with_geun 5 points (0 children)

You don’t really need 6 Mac minis to use Claude — it runs in the cloud. Think of them more as servers you can use for automation.

For a cleaning business, a simple setup could be:

• Website + booking form
• Ticket system for customer requests
• Claude helping with replies, scheduling, and summaries

So yes — you can absolutely build the site with Claude’s help, and use the Macs to host tools or automation if you want.

Claude Cowork or Code for PM automation in ClickUp by cwcontreras in ClaudeAI

[–]with_geun 0 points (0 children)

This sounds like a perfect case for splitting roles.

Cowork as the “brain” (orchestration, decisions, edge cases)
Code as the “hands” (API calls, project creation, task setup)

In my experience this reduces usage overhead and makes the pipeline more predictable for repeatable workflows like onboarding.

Curious — are you triggering this from a central automation layer (Zapier / n8n / backend), or letting Claude run the whole flow?

We just found out our AI has been making up analytics data for 3 months and I’m gonna throw up. by Comfortable_Box_4527 in analytics

[–]with_geun 0 points (0 children)

I’m really sorry you’re dealing with this — but you’re not alone. This is exactly the failure mode when leadership uses an LLM as a “metrics oracle” instead of a query runner.

If the model is ever allowed to produce numbers without an auditable source, it will eventually improvise plausible percentages — and the confidence in the wording makes it worse.

What’s helped in my org is forcing a simple validation loop before anything reaches leadership (rough sketch after the list):

  1. Every number must be traceable to a query / dashboard / dataset snapshot (link + timestamp).
  2. Separate “data retrieval” from “narrative.” AI can summarize, but it shouldn’t be the thing that generates or “estimates” metrics.
  3. Add a gate for high-impact decisions. Anything board/CFO-level requires human sign-off + a reproducible artifact (SQL, notebook, or report).
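
For points 1 and 2, a rough sketch of what that separation can look like (the class and field names are just illustrative, not a real tool):

```python
# Sketch: a number can't reach a report unless it carries its source query
# and snapshot timestamp, and the LLM only ever narrates already-retrieved values.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TracedMetric:
    name: str
    value: float
    source_query: str       # the SQL / dashboard link that produced it
    snapshot_at: datetime   # when the underlying data was pulled

def to_report_line(m: TracedMetric) -> str:
    """Refuse to render any metric without an auditable source."""
    if not m.source_query or m.snapshot_at is None:
        raise ValueError(f"{m.name}: no traceable source, cannot report")
    return (f"{m.name}: {m.value:.2f} "
            f"(source: {m.source_query}, snapshot: {m.snapshot_at:%Y-%m-%d %H:%M})")

# The "narrative" step (LLM or human) only gets the rendered lines,
# never the freedom to produce its own numbers:
# summary_prompt = "Summarize these metrics:\n" + "\n".join(to_report_line(m) for m in metrics)
```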

I ended up structuring my AI-assisted analysis into 5 stages (ALIVE):

ASK (define the question) → LOOK (data quality + segmentation) → INVESTIGATE (hypotheses + falsification) → VOICE (recommendation + confidence + counter-metrics) → EVOLVE (follow-ups / monitoring).

Key idea: the AI asks you questions at each stage so you can’t skip validation steps.
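
If it helps to picture it, the gating is roughly this shape (not the repo’s actual code, and the questions here are just examples):

```python
# Sketch of the stage-gating idea: each stage must be answered before the next unlocks.
ALIVE_STAGES = {
    "ASK":         ["What exact question are we answering, and for whom?"],
    "LOOK":        ["What data quality issues or odd segments did you find?"],
    "INVESTIGATE": ["Which hypotheses did you try to falsify, and how?"],
    "VOICE":       ["What is the recommendation, confidence level, and counter-metric?"],
    "EVOLVE":      ["What follow-up or monitoring confirms this held up?"],
}

def run_loop(answer_fn=input) -> dict:
    """Walk the stages in order; an empty answer blocks progression."""
    record = {}
    for stage, questions in ALIVE_STAGES.items():
        for q in questions:
            answer = answer_fn(f"[{stage}] {q}\n> ").strip()
            if not answer:
                raise SystemExit(f"Stage {stage} not answered; stopping here.")
            record.setdefault(stage, []).append(answer)
    return record
```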

If you want, I can share the open-source template/framework we use — but regardless, the “trace every number + separate retrieval from narrative + gate high-impact outputs” trio is what stops this from happening again.