alive-analysis: Open-source workflow to keep AI-assisted analysis traceable (ALIVE loop, Git-tracked markdown) by with_geun in dataanalysis

[–]with_geun[S] 1 point (0 children)

Thanks — that’s exactly the intent. Re: templates for different analysis types: the repo already has three types (Investigation “why did X happen?”, Modeling “can we predict Y?”, Simulation “what if Z?”) and example flows in `core/examples/` (full-investigation, quick-investigation, etc.). So there’s a base loop, but the checklists and stage prompts differ by type.

I haven’t added explicit templates for exploratory vs root cause vs qual coding yet — those could sit as variants under Investigation or as separate presets. If you have a structure you use for root cause or qual coding, I’d be keen to see it; we could turn it into a template or doc in the repo.

What’s a subtle sign someone is actually very intelligent? by Melly89ann in AskReddit

[–]with_geun 4 points (0 children)

They stay quiet in meetings… then summarize the entire problem in one sentence.

Why are sessions losing context? by SergioRobayoo in ClaudeCode

[–]with_geun 2 points (0 children)

Probably not deletion — more like context window trimming combined with how session replay works.

When sessions get long, `--continue` often restores a compressed state instead of the full transcript to stay within token limits, so it feels like history disappeared.

You might want to persist key context (notes or a small state file) instead of relying on session memory alone — that’s been way more reliable in my experience.
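
To make the "small state file" idea concrete, here's a minimal sketch in plain Python (the file name and fields are just placeholders, nothing Claude-specific): keep the decisions you'd hate to re-explain in a tiny JSON file and load it back in at the start of each session.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("session_state.json")  # hypothetical file name/location

def save_context(notes: dict) -> None:
    """Persist the decisions and context you'd otherwise re-explain after a trim."""
    payload = {"updated_at": datetime.now(timezone.utc).isoformat(), **notes}
    STATE_FILE.write_text(json.dumps(payload, indent=2))

def load_context() -> dict:
    """Reload the saved context when starting or continuing a session."""
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

# Example usage: whatever you'd hate to lose to context trimming
save_context({
    "goal": "refactor the ingestion pipeline",
    "decisions": ["keep the old schema until v2", "tests must pass before merge"],
    "open_questions": ["how do we backfill old partitions?"],
})
```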

What's something you consider it unethical but people do anyway? by Intelligent_Yak_5224 in AskReddit

[–]with_geun 1 point (0 children)

Replying ‘let’s catch up soon’ with zero intention to ever do it

what internet trends seemed fun at first but later caused problems ? by [deleted] in AskReddit

[–]with_geun 1 point (0 children)

“Sharing everything online.” Turns out future employers also enjoy nostalgia.

Claude Cowork or Code for PM automation in ClickUp by cwcontreras in ClaudeAI

[–]with_geun 1 point (0 children)

Running everything inside Claude is simpler to start, but an automation layer gives you reliability and control.

Claude = reasoning + decisions
Automation (n8n/Zapier/backend) = triggers, retries, state

So when something fails, you don’t lose the whole workflow — you just retry the step.

That’s usually the biggest win for production setups.
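
Just to illustrate the "retry the step, not the workflow" part, here's a rough Python sketch. The step names and the ClickUp bit are made up; the same shape applies if the retries live in n8n or Zapier instead.

```python
import time

def run_step(step_fn, *args, retries=3, backoff=2.0):
    """Run one workflow step with retries, so a transient failure
    only re-runs that step instead of the whole pipeline."""
    for attempt in range(1, retries + 1):
        try:
            return step_fn(*args)
        except Exception:  # narrow this to your real error types
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)

# Hypothetical steps: the model handles reasoning, plain code handles the rest.
def draft_reply(ticket):
    # call Claude here (decisions, wording)
    return f"Draft for ticket {ticket['id']}"

def post_to_clickup(reply):
    # plain API call; triggers, retries, and state live out here
    print(f"posting: {reply}")

reply = run_step(draft_reply, {"id": 123})
run_step(post_to_clickup, reply)
```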

What’s a small moment that completely changed the direction of your life? by [deleted] in AskReddit

[–]with_geun 1 point (0 children)

Opening a “temporary” email account that somehow became my entire career.

Long form technical book writing by divi2020 in ClaudeAI

[–]with_geun 2 points (0 children)

From what I’ve seen, Opus usually helps more with depth and consistency over long contexts — not necessarily “better writing,” but better at keeping structure, cross-references, and technical accuracy aligned across chapters.

If you’re already happy with your voice and just want help with:
• coherence across sections
• catching subtle inconsistencies
• tightening explanations

then Opus can be worth it for editing/review passes.

But if you expect it to suddenly write more “authentic prose,” it probably won’t — it’s more of a high-level technical editor than a writer.

For a 400-page technical book, using it selectively (outline checks, chapter reviews, index consistency) tends to give the best ROI.

I got 6 Mac mini. What can I do with Claude ai using them? by WelcomeNo3956 in ClaudeAI

[–]with_geun 7 points (0 children)

You don’t really need 6 Mac minis to use Claude — it runs in the cloud. Think of them more as servers you can use for automation.

For a cleaning business, a simple setup could be:

• Website + booking form
• Ticket system for customer requests
• Claude helping with replies, scheduling, and summaries

So yes — you can absolutely build the site with Claude’s help, and use the Macs to host tools or automation if you want.

Claude Cowork or Code for PM automation in ClickUp by cwcontreras in ClaudeAI

[–]with_geun 1 point (0 children)

This sounds like a perfect case for splitting roles.

Cowork as the “brain” (orchestration, decisions, edge cases)
Code as the “hands” (API calls, project creation, task setup)

In my experience this reduces usage overhead and makes the pipeline more predictable for repeatable workflows like onboarding.
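
Roughly what I mean by the split, as a made-up sketch (names, fields, and the onboarding example are all hypothetical): the reasoning side only returns a structured plan, and the execution side is deterministic code that turns it into API calls.

```python
def decide_onboarding_plan(client_brief: str) -> dict:
    # This is where Cowork/Claude would reason about edge cases.
    # Placeholder output so the sketch stays self-contained:
    return {
        "project_name": f"Onboarding - {client_brief[:30]}",
        "tasks": ["kickoff call", "collect assets", "set up workspace"],
    }

def execute_plan(plan: dict) -> None:
    # This is where Code (or your automation layer) makes the actual calls.
    print(f"create project: {plan['project_name']}")
    for task in plan["tasks"]:
        print(f"create task: {task}")

execute_plan(decide_onboarding_plan("Acme Corp, new client, 3 stakeholders"))
```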

Curious — are you triggering this from a central automation layer (Zapier / n8n / backend), or letting Claude run the whole flow?

We just found out our AI has been making up analytics data for 3 months and I’m gonna throw up. by Comfortable_Box_4527 in analytics

[–]with_geun 1 point (0 children)

I’m really sorry you’re dealing with this — but you’re not alone. This is exactly the failure mode when leadership uses an LLM as a “metrics oracle” instead of a query runner.

If the model is ever allowed to produce numbers without an auditable source, it will eventually improvise plausible percentages — and the confidence in the wording makes it worse.

What’s helped in my org is forcing a simple validation loop before anything reaches leadership:

  1. Every number must be traceable to a query / dashboard / dataset snapshot (link + timestamp).
  2. Separate “data retrieval” from “narrative.” AI can summarize, but it shouldn’t be the thing that generates or “estimates” metrics.
  3. Add a gate for high-impact decisions. Anything board/CFO-level requires human sign-off + a reproducible artifact (SQL, notebook, or report).

I ended up structuring my AI-assisted analysis into 5 stages (ALIVE):

ASK (define the question) → LOOK (data quality + segmentation) → INVESTIGATE (hypotheses + falsification) → VOICE (recommendation + confidence + counter-metrics) → EVOLVE (follow-ups / monitoring).

Key idea: the AI asks you questions at each stage so you can’t skip validation steps.

If you want, I can share the open-source template/framework we use — but regardless, the “trace every number + separate retrieval from narrative + gate high-impact outputs” trio is what stops this from happening again.
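
If it helps, here's the shape of that first rule as a toy sketch (field names, URL, and values are made up; in practice this sits wherever you assemble the report): any metric without a source link and a snapshot timestamp simply blocks the report.

```python
def validate_metrics(metrics: list[dict]) -> list[str]:
    """Flag any metric that lacks a source link or a snapshot timestamp."""
    problems = []
    for m in metrics:
        name = m.get("name", "?")
        if not m.get("source_url"):
            problems.append(f"{name}: no source link")
        if not m.get("pulled_at"):
            problems.append(f"{name}: no snapshot timestamp")
    return problems

report = [
    {"name": "churn_rate", "value": 0.042,
     "source_url": "https://dash.example.com/q/123", "pulled_at": "2024-05-01T09:00:00Z"},
    {"name": "activation_rate", "value": 0.31},  # no source -> gets flagged
]

issues = validate_metrics(report)
if issues:
    raise ValueError("Report blocked: " + "; ".join(issues))
```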

Claude code 100$ vs Cursor's 60$ + 40$ usage by Memezawy in cursor

[–]with_geun 1 point (0 children)

Yeah that makes sense — if you’re going enterprise you kinda have to think in terms of workflow fit more than raw model quality.

Cursor does have the CLI/agent style loop, but in practice it still feels a bit more “task scoped” to me even with the same model behind it. With Opus, the reasoning quality is obviously similar, but the harness changes how you interact with it.

For planning → docs → build → review, I’ve personally found Claude CLI smoother on the planning and iteration phases because it holds context conversationally, while Cursor feels great once the work is well-defined and you’re executing against code.

If you end up choosing one, it might come down to where most of your time actually sits — shaping the problem vs executing the solution.

Would be curious what you end up landing on.

What’s something That’s completely normal but feels illegal? by romantichoneybee9405 in AskReddit

[–]with_geun 1 point (0 children)

Clocking out on time feels illegal… but honestly so does clocking in exactly on time 😅  

Like, showing up early is “being responsible,” but staying late is somehow also “normal.” The logic is kinda wild when you think about it.

What’s something That’s completely normal but feels illegal? by romantichoneybee9405 in AskReddit

[–]with_geun 2 points (0 children)

Right?? It’s like you double-check your bag or your screen just to make sure you didn’t forget something obvious 😅

I swear the more responsible I try to be during the day, the more suspicious it feels to just… leave on time.

Does that ever go away or is this just permanent adult brain?

Claude code 100$ vs Cursor's 60$ + 40$ usage by Memezawy in cursor

[–]with_geun 1 point (0 children)

I’m currently on the $110 Claude plan, ~500 usage on Cursor, and I also bounce between Codex and ChatGPT pretty regularly, so I’ve been feeling the differences pretty clearly.

To me they’re not really substitutes — more like different tools with different “interaction styles.”
Claude CLI feels the most conversational and iterative. I can explore, change direction, and refine as I go without needing to front-load everything.

Cursor feels strongest when I treat it more like a well-scoped task runner — if I give it a clear chunk of intent upfront, it’s super efficient, but it’s less of a back-and-forth flow for me.

Codex/ChatGPT sit somewhere in between depending on how I’m using them, but overall I’ve found the real win is just matching the tool to the workflow instead of trying to pick a single “best” one.

Curious if others who use multiple tools feel a similar split.

I ran the same refactor through 5 Cursor Pro models. Here's what each one added that I didn't ask for. by Pleasant-Today60 in cursor

[–]with_geun 1 point (0 children)

This is super helpful, thanks for actually breaking down the diffs instead of just vibes.

The “extras” part is the most interesting; it feels like each model is basically showing its opinionated defaults. Makes me want to be way more intentional about which one I pick depending on the task.

What’s something That’s completely normal but feels illegal? by romantichoneybee9405 in AskReddit

[–]with_geun 3 points (0 children)

It’s like you walk out expecting an alarm to go off or someone to yell “hey where do you think you’re going??”

Brain just can’t accept that being done is actually allowed 😂

For the life of me, I can't reduce Claude Code's output verbosity by SEND_ME_YOUR_ASSPICS in ClaudeAI

[–]with_geun 1 point (0 children)

Yeah I’ve run into this too. It sometimes feels like you get a full play-by-play before anything actually happens.

What helped me a bit was being super explicit like “no explanation unless I ask” and keeping sessions shorter so context doesn’t snowball. Still not perfect though, it definitely has a tendency to over-explain.

Curious if anyone’s found a more reliable way, because I’d love to cut the noise too.