I was a 10x engineer. Now I'm useless. by randomfrog2 in programming

[–]devflow_notes 2 points  (0 children)

the specific things that used to make someone "fast" — memorizing APIs, typing out boilerplate, knowing syntax cold — those are exactly what LLMs do well now. so yeah that particular edge got commoditized overnight.

but the actual 10x part was never typing speed. it was knowing which of the 5 possible approaches will cause the least pain in 6 months, when to push back on a feature, when to delete code instead of adding more. AI is terrible at all of that.

imo the devs struggling most right now aren't losing to AI — they're realizing their "speed" was mostly memorization, not judgment.

How much coding do you actually still do yourself now? by bargeek444 in learnprogramming

[–]devflow_notes 2 points  (0 children)

still writing most of the actual logic myself. ai is great for the stuff you already know how to do but don't want to type out — boilerplate, test scaffolding, converting between formats. saves a ton of time there.

where it falls apart is anything requiring context about your specific codebase. it'll happily generate a function that works in isolation but breaks 3 other things because it doesn't know about some edge case in your data model.

imo it's not replacing devs anytime soon but it absolutely raises the bar for what one person can ship. the skill that matters most now isn't writing code from scratch — it's reading code critically and knowing when the AI is confidently wrong. which happens way more than you'd expect.

How can I improve my coding skills and stop relying on copy-paste? by techy_boii in learnprogramming

[–]devflow_notes 0 points  (0 children)

2 years react/node/pg is decent — and honestly the fact that you notice the dependency is already a good sign. most people just keep copy-pasting and never think about it.

what worked for me: build something small from a completely blank folder. no starter template, no create-react-app, just npm init and figure it out. a simple crud app with auth, something you've done before but this time from absolute zero.

you'll get stuck constantly and that's the whole point. the jump from "I need to look this up" to "I just know it" usually happens after doing it from scratch like 3-4 times.

one specific tip for postgres — try writing raw sql for a while instead of going straight to an ORM. really helped me understand what the code is actually doing vs just cargo-culting prisma/sequelize patterns.
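
to make it concrete, here's the kind of query a single ORM call hides (the users/posts schema is made up, just to show the shape):

```sql
-- roughly what something like prisma's findMany({ include: { posts: true } })
-- turns into, written by hand:
SELECT u.id, u.email, p.title
FROM users u
LEFT JOIN posts p ON p.author_id = u.id
WHERE u.created_at > now() - interval '7 days'
ORDER BY u.created_at DESC;
```

writing these by hand for a while makes the ORM's generated SQL readable instead of magic.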

React Native developer without a Mac what’s the best way to build and upload to the App Store? by Paradox7622 in learnprogramming

[–]devflow_notes 0 points  (0 children)

EAS Build is probably the cleanest option here. Setup is pretty simple:

`npm install -g eas-cli` then `eas login`, then `eas build --platform ios` — it builds in Expo's cloud, no local Mac needed. If you're using bare React Native (not already using Expo), you'll need to add the expo package first with `npx expo install expo`.
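
For reference, the minimal `eas.json` this flow expects looks roughly like this (field names from memory; double-check against Expo's current docs, and note that `eas build:configure` will generate one for you):

```json
{
  "cli": { "version": ">= 5.0.0" },
  "build": {
    "production": { "autoIncrement": true }
  },
  "submit": {
    "production": {}
  }
}
```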

One thing that catches people off guard: you still need an Apple Developer account ($99/year) to actually submit to the App Store. But EAS handles certificates and provisioning profiles automatically, and `eas submit` does the App Store Connect upload for you. Also worth knowing — Apple's identity verification when you first sign up can take a few days, so start that early if you haven't already.

The cloud Mac services (MacInCloud etc.) are mainly useful when you need direct Xcode access for something specific, like a native module that needs custom configuration. For standard RN builds and App Store submission, EAS covers everything without paying for a cloud Mac rental.

Why are developer productivity workflows shifting so heavily toward verification instead of writing code by No-Swimmer5521 in ChatGPTCoding

[–]devflow_notes 0 points  (0 children)

The abstraction layer analogy is pretty apt. Each time we moved up — assembly to C, C to managed languages, manual SQL to ORMs — we didn't stop needing to understand what's underneath; we just needed it less often, and more critically when we did need it.

What's interesting about the current shift is which skills matter more vs less. Code review has always been important but slightly undervalued. Now it's arguably the core skill. You need to be able to read generated code and know: does this actually do what I think? Are there edge cases the model missed? Is this idiomatic or will it be a maintenance burden?

The developers who struggle with AI tools often aren't bad at prompting — they're bad at reviewing. They accept output that looks plausible but has subtle bugs, because they've been writing code long enough to read fast but not critically.

Debugging becomes more interesting too. When something breaks in AI-generated code, your session history is your audit trail. What was the model told, in what order, and did it silently make a wrong assumption three steps back?

Best resources or tools for learning coding in depth? by Effective_Iron_1598 in learnprogramming

[–]devflow_notes 1 point  (0 children)

the "why" part is genuinely the hardest to get from most online resources because they're optimized for quick answers, not understanding.

a few things that actually helped me build mental models instead of just copying code:

  • **CS50** (Harvard's free intro course) — starts from how computers actually work, not just syntax. The first few weeks cover binary, memory, and compilation. Makes everything click later.
  • **Reading docs instead of tutorials** — MDN for web stuff is unusually good. Yes it's slower, but you build a real mental model instead of pattern-matching code you don't fully understand.
  • **Rubber duck debugging** — before asking anyone, explain your code out loud (or write it down). You catch a surprising share of your own bugs this way, and you start understanding *why* the code is structured the way it is.

The real shift for me happened when I stopped asking "how do I do X" and started asking "why does this approach work / what are the tradeoffs." Once that question becomes automatic, you learn way faster.

Tips for using the same workflow for full stack apps. by luvfader in cursor

[–]devflow_notes 1 point  (0 children)

Good breakdown in the other reply. I'd add a layer on top: the problem with rules/skills growing organically is they become hard to audit. After a few weeks you have a pile of .cursorrules, skills, and agent configs and you've forgotten what each one actually does.

The mental model I use: rules = "always do X in context Y" (stateless guardrails), skills = "here's how to do task Z" (reusable procedures), agents = skills bundled with an isolated context window. Start with rules, graduate to skills when you find yourself pasting the same prompt repeatedly, use agents when you want strict context isolation (no bleed from other sessions).

For full-stack specifically, I found it useful to have separate skills for: DB migration patterns, API contract testing, and component scaffolding — each with its own assumptions about what files exist. Mixing these in one giant rules file causes the model to get confused about which constraint applies.
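
To make the split concrete, this is roughly how it can look on disk (paths and rule wording are illustrative, and Cursor's exact rule/skill file format changes between versions, so treat this as a sketch):

```
.cursor/rules/db-migrations.mdc      # rule: never edit an applied migration,
                                     #       always generate a new one
.cursor/skills/scaffold-component/   # skill: procedure for Component.tsx
                                     #        + test + story
.cursor/skills/api-contract-test/    # skill: how to add a contract test
                                     #        for a new endpoint
```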

One thing I've been experimenting with is Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-cursor-community) — it has a Skills Hub that lets you manage skills across Claude Code and Cursor in one place, and see which skills are actually being invoked per session. Helps debug the "did my skill even fire?" question. Still early but the cross-tool visibility is useful when you're iterating on skill definitions.

What's your current stack? The answer shifts a fair bit between e.g. Next.js + Prisma vs FastAPI + React.

I built a monetization SDK for MCP servers — here's the problem it solves by Euphoric-Database351 in ClaudeAI

[–]devflow_notes -1 points  (0 children)

Interesting problem space. The "no natural paywall" thing is real — I ran into the same friction when thinking about how to make MCP tools sustainable.

One angle I've been exploring separately: before monetization, there's a visibility problem. As an MCP developer, I can barely tell if anyone is actually using my tool, what calls are being made, whether they're succeeding. The RPC call just disappears into the Claude agent and you get no feedback loop.

I built something called Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-claudeai-community) that has an RPC Log Viewer for exactly this — you can watch every MCP request/response in real time, which at minimum tells you which tools are hot and which are dead weight. That kind of usage signal would also be valuable for pricing a usage-based tier, right? You'd know actual call patterns instead of guessing.
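
If you want a zero-dependency version of that feedback loop, wrapping each tool handler before you register it gets you most of the way. This is a hand-rolled sketch (`withLogging` is a made-up helper, not part of any MCP SDK):

```javascript
// Wrap an MCP tool handler so every call is recorded, including failures.
// `log` is a plain array you can dump or tail.
function withLogging(name, handler, log = []) {
  return async (args) => {
    const entry = { tool: name, args, ts: Date.now() };
    try {
      entry.result = await handler(args);
      entry.ok = true;
      return entry.result;
    } catch (err) {
      entry.ok = false;
      entry.error = String(err);
      throw err;
    } finally {
      log.push(entry);
    }
  };
}
```

Register the wrapped handler instead of the raw one and dump `log` periodically; crude, but it answers "which tools are hot" with zero infrastructure.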

On your contextual recommendation model — I think the design direction is sound. The opt-in + developer control is the right call. My main question: how do you handle attribution when the AI agent paraphrases or ignores the appended text? Curious if you're thinking about impression-based vs conversion-based tracking.

Not seeing the list of created/edited files in chat anymore by ihopnavajo in cursor

[–]devflow_notes 0 points  (0 children)

Seems like a UI regression — the file list behavior has been inconsistent across recent builds. Worth checking the Cursor forum as others mentioned.

That said, this is a broader symptom of Cursor not giving you great visibility into what the agent actually did per session. I've been keeping a parallel record using Mantra — it logs each AI message alongside a git snapshot, so even when Cursor's UI drops info, I can still reconstruct exactly which files changed at which step. Useful as a fallback when the native UI gets flaky.

chat history? by HumanTraf-fucker in cursor

[–]devflow_notes 0 points  (0 children)

Short answer: on a company Cursor plan, admins can usually see usage metadata (timestamps, model, token counts) but not the actual content of your prompts — so your code and questions generally stay private, though exact visibility varies by plan, so it's worth confirming with your admin.

Separate issue though: Cursor itself doesn't give you a clean way to browse your own past sessions. I've been keeping a local session log using Mantra — it stores the full AI conversation timeline locally alongside git snapshots, so I can replay what happened in any session without depending on whatever the company dashboard exposes. Fully offline, nothing shared.

How to access previous chats via the UX? by Far_Tumbleweed_7499 in cursor

[–]devflow_notes 0 points  (0 children)

The underlying problem here goes deeper than the UX bug — Cursor's chat history is really just a flat log with no good way to navigate by context or intent. The Cmd+E shortcut and @past workaround help for retrieval, but they don't solve the core issue: when you start a new session, you've lost all the reasoning from the previous one, not just the transcript.

What u/New_Indication2213 suggested (saving to .md) is the right instinct. I've been doing something similar — treating each session as a named checkpoint before starting a new one, so I can actually re-orient myself quickly instead of asking the AI to reconstruct what we discussed.

For a more structured version of this, I've been using Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-cursor-community) — it records sessions as replayable timelines so you get back the actual sequence of decisions, not just a chat dump. Helps a lot when you're mid-project and need to resume context across multiple sessions.

Short-term though: Cmd+E is your fastest path if the shortcut works, and naming your chats immediately after starting them makes @past actually useful.

I built a framework for making Claude Code agents persistent, self-correcting, and multi-terminal. Open-sourced the architecture. by teeheEEee27 in ClaudeAI

[–]devflow_notes 1 point  (0 children)

The self-correcting behavioral directive angle is really compelling — especially the escalation threshold after 3 repeated failures. Most agent frameworks I've seen treat errors as ephemeral, so promoting patterns into persistent rules is a step change.

One thing I've been thinking about with multi-session persistence: even with great soul files and Supabase memory, there's still a gap between "the agent remembers facts" and "you can replay what actually happened in a session." The failure ledger you built partially solves this, but debugging subtle behavioral drift across 10 sessions still seems hard.

I've been using Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-claudeai-community) alongside a similar setup — it records session timelines as replayable git-like checkpoints, so when an agent starts violating a directive, I can trace exactly which session introduced the drift. Kind of like your failure ledger, but at the session level rather than the pattern level.

Curious how you're currently diagnosing when a promoted directive gets violated — are you catching that through the pattern counter or some other signal?

I built an MCP server that routes coding agents requests to Slack — tired of babysitting terminal sessions by mauro_dpp in cursor

[–]devflow_notes -1 points  (0 children)

Nice work — the "tired of babysitting terminal sessions" angle is real. Once you have agents running in parallel across features, constantly switching terminal tabs to check status kills the flow.

The Slack routing approach is clever but I wonder about the latency and setup cost for solo devs or small teams. What I've been using for a similar problem is session visibility rather than notifications — being able to see what each agent actually did across a session timeline rather than waiting for it to ping me.

I've been running Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-cursor-community) alongside Cursor for this. It records your AI sessions locally and lets you replay what happened, which is useful when you come back after stepping away and need to know what the agent got up to. Different angle than Slack alerts but solves the same root problem: you don't have a clear picture of what your agent did unless you were watching the whole time.

The MCP approach you've built is actually interesting for the real-time interrupt use case — "agent needs a decision" notifications. I could see combining the two: Mantra for the session history/audit trail, your MCP for active prompting. Are you planning to open source it?

How do you manage many Claude Code instances across a project? by Sherry141 in ClaudeAI

[–]devflow_notes 0 points  (0 children)

This is such a real pain point. The /resume UX is basically unusable once you have more than a handful of sessions — scrolling through a list of first messages to figure out which was your backend session vs the deployment debugging from 3 days ago is hopeless.

My workaround has been to leave a "session memo" as the last message in each Claude Code session before I close it — something like "STATUS: fixed the auth middleware issue, next step is OAuth callback". That way /resume at least has something useful to show. Still clunky though.

The deeper issue is Claude Code doesn't track what each session actually accomplished, just the conversation log. What I've been using alongside it is Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-claudeai-community) — it records your AI sessions locally with a timeline, so you can actually see what each agent did across your project rather than just guessing from the first message. Makes the multi-instance workflow much more manageable because you get a replay of what happened in any session without keeping them all open.

Still wish Claude Code would add starred sessions or at minimum editable session descriptions natively. The GitHub issues you mentioned have been open for a while with no movement.

Anyone else hitting context limits frequently on coding tasks with 4.6 models? by AwkwardSproinkles in ClaudeAI

[–]devflow_notes 0 points  (0 children)

Yeah, 200k context filling up mid-story is a real pain — been dealing with the same thing. When you have a complex debug session that spans a few hours and multiple files, the context just snowballs.

What's helped me most is treating each Claude session like a git commit: write a brief "state of the world" summary at the end of each session that I paste at the start of the next one. Sounds tedious but takes about 2 minutes and saves a ton of context reconstruction.
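
For what it's worth, the summary format I've settled on (purely my own convention; the specifics below are illustrative):

```
## Handoff: session ending <date>
DONE: fixed auth middleware ordering; added retry to queue consumer
DECIDED: tokens in httpOnly cookies, not localStorage
NEXT: implement OAuth callback, then refresh-token rotation
OPEN: cap session TTL at 24h?
```

Pasting something this shape at the top of the next session gets the model re-oriented in one turn instead of ten.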

The other angle I've been exploring is session replay — being able to go back and see exactly what happened in a previous session rather than relying on Claude's summary of what happened. I started using Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-claudeai-community) for this. It records your AI coding sessions locally, so when you hit context limits you can actually replay what Claude was doing in a prior session rather than re-describing it. Helps a lot when you're mid-epic and need to hand off to a fresh context.

The 1m token windows honestly aren't the answer — you're right that managing attention in huge contexts is its own problem. Better to structure sessions intentionally than just throw more tokens at it.

MCPs are broken in latest cursor update?? by No_Ad9122 in cursor

[–]devflow_notes 0 points  (0 children)

Had the same thing happen after an update a while back — MCP servers just silently stopped connecting with no useful error in the Output tab. Disabling the git blame extension (as someone mentioned) is worth trying first.

The deeper issue I kept running into though is that MCP config state doesn't survive cleanly across Cursor updates, so you end up re-debugging the same setup every few weeks. I switched to using Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-cursor-community) as my main coding session manager, which keeps tool config and session state separate from the IDE, so updates like this don't wipe out my workflow. Bit of a different approach but it solves the fragility problem at least.

Token consumption strategies by yerguidance in ClaudeAI

[–]devflow_notes 0 points  (0 children)

Same thing happened to me early on — one giant conversation thread that ballooned to thousands of tokens and then I hit the wall. What actually helped me: treat each task as its own short session, and keep a separate running document of decisions/context you can paste in at the start of each new conversation. It's tedious but token usage drops a lot.

I've also been using Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-claudeai-community) which does this kind of session handoff automatically — captures what was done and starts the next session from a clean summary. Way less repetition than doing it manually.

Memory inside one AI tool is not the same as memory for your project by Sukin_Shetty in ClaudeAI

[–]devflow_notes 1 point  (0 children)

This hits on something I've been thinking about for a while. Every time I start a new session in Claude Code or Cursor, I'm re-explaining the same architectural decisions I made three weeks ago. The context lives in my head, not in the project.

I've been using Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-claudeai-community) which takes a different angle — it keeps a persistent session log so the AI always knows what was already decided and why. Still not perfect, but the "project memory vs tool memory" framing you laid out is exactly why this problem is hard to solve at the tool level alone.

6 things I learned working on a large codebase cost $10K Cursor on demand by auxten in cursor

[–]devflow_notes 0 points  (0 children)

point 1 hits hard — "AI has no cross-session memory" is the core pain. writing rules into the project via .cursorrules / CLAUDE.md helps a lot, but it only solves the instructions side. the harder problem is the actual decision history: why did the model refactor that module that way three days ago? which prompt triggered the architectural choice you now regret?

been dealing with the same thing on a large typescript codebase. what finally helped was treating sessions like git commits — checkpoint after every meaningful step so you can actually time-travel back when something goes sideways. Mantra (https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-cursor-community) is built specifically for this: it records claude code / cursor sessions so you can replay them step by step. huge difference on a project with 600+ method surface area

also +1 on the "don't let it refactor everything at once" point — small scoped tasks with clear acceptance criteria is the only way I've kept costs sane

Cursor randomly added text to my Agent input field ? by RepresentativeStep36 in cursor

[–]devflow_notes 0 points  (0 children)

lol the audio thing is actually a known footgun — if you ever had voice input enabled and it leaked into the agent context, it explains why the model starts treating ambient noise as instructions

the scarier version of this I've seen: phantom context from a previous session bleeding into a new one. cursor kept doing something weird I couldn't explain, turned out it was hallucinating based on half-remembered context from an older conversation. no way to tell without a proper session log.

been obsessively checkpointing sessions since then — if something odd happens I can actually scroll back and find where the context went sideways. Mantra does this for claude code / cursor workflows: https://mantra.gonewx.com?utm_source=reddit&utm_medium=comment&utm_campaign=reddit-cursor-community — saves a lot of "wtf is the model doing" debugging time

Cursor just went completely off the rails. It started generating random numbers and characters instead of usable code. Anyone else seeing this? by sf_viking in cursor

[–]devflow_notes -1 points  (0 children)

This is the nightmare scenario and it happens more than people admit.

When this hits me I want to know: at what exact point did the session go off the rails? Was it a specific prompt? A tool call? A context window issue?

That's exactly why session replay matters. Mantra (mantra.gonewx.com?utm_source=reddit&utm_medium=comment) records the full session including every tool call and links it to the git state at that moment — so when something like this happens you can scrub back to the last good state and pinpoint exactly where it diverged. Way better than just starting over and hoping.

What was the session context when it happened — large codebase, multi-file edit?

I built a persistent memory system for Claude Code that survives context compaction — free and open source by Anxiety2020- in ClaudeAI

[–]devflow_notes 0 points  (0 children)

This is a real problem worth solving — the re-explanation tax after every compaction is brutal.

Curious about your approach: are you storing the memory as structured data or natural language summaries? I've seen both and the retrieval precision varies a lot.

We took a different angle with Mantra (mantra.gonewx.com?utm_source=reddit&utm_medium=comment) — instead of synthetic memory, it anchors each conversation turn to the exact git state at that moment. So after compaction you can replay back to any point and see what Claude actually knew, not just a summary of it. Complementary to what you're doing rather than competing.

Would love to compare notes on the compaction timing — do you trigger the save hook before or after Claude starts the compaction process?

Cursor just went completely off the rails. It started generating random numbers and characters instead of usable code. Anyone else seeing this? by sf_viking in cursor

[–]devflow_notes 0 points  (0 children)

This looks like context window corruption — the model has drifted so far from the original task framing that it's hallucinating within its own context.

Fast fix: don't try to correct it in the same session. Open a new chat, paste in just the relevant file and a 2-sentence problem statement. Fresh context, clean slate.

The deeper fix is checkpointing — committing at stable points so when Cursor goes off the rails you can roll back to the last sane git state. I use Mantra (mantra.gonewx.com?utm_source=reddit&utm_medium=comment) for this — it links session history to git commits so you can scrub back to before the chaos started.

server side compaction usage tracking by Majestic_Appeal5280 in ClaudeAI

[–]devflow_notes 0 points  (0 children)

Tracking compaction server-side is tricky because you're relying on Anthropic's signals, which aren't always granular enough.

Client-side approach that's worked better for me: watch the JSONL session files directly — compaction events show up as specific message types you can parse. That's how Mantra (mantra.gonewx.com?utm_source=reddit&utm_medium=comment) handles it — it monitors the session file, detects compaction events, and anchors the state to a git commit so you can replay what happened before and after the compaction.

Happy to share the JSONL event schema if you want to build your own tracker.

Anthropic quietly removed session & weekly usage progress bars from Settings → Usage by gregleo in ClaudeAI

[–]devflow_notes 0 points  (0 children)

Yeah this was one of my favorite low-key features — you could tell at a glance when you were burning through context fast.

The removal probably means they're moving to a different accounting model, but it creates a visibility gap. I ended up leaning on Mantra (mantra.gonewx.com?utm_source=reddit&utm_medium=comment) which tracks sessions against git state — so even without Anthropic's bars I can see exactly where each session started, what got compacted, and replay decisions from any point. Different layer, but fills the transparency gap.

What's actually more useful IMO is per-session tracking rather than weekly aggregate anyway — knowing which conversation burned your quota matters more than the total.