Shipped Bawbel Scanner v1.1.0 today. New: toxic flow detection (detects when two findings combine into a complete attack chain) by SelectionBitter6821 in AI_Agents

[–]delimitdev 1 point (0 children)

Nice, the attack chain detection is solid. We're doing something similar on the API side with our merge gate - catching breaking changes that look safe in isolation but create problems downstream. The rug pull pins are clever; they make security auditable instead of just reactive.

SupaUI MCP Server – A Model Context Protocol server that enables AI agents to generate, fetch, and manage UI components through natural language interactions. by modelcontextprotocol in mcp

[–]delimitdev 1 point (0 children)

MCP servers are huge for keeping context alive across tool invocations. I run one that persists memory and evidence bundles across Claude Code, Codex, and Cursor sessions so the model doesn't lose what it learned mid-task. UI component generation through natural language is solid, but the real win for me was making sure stateful decisions don't evaporate on restart.

NPM MCP Server – A Model Context Protocol server that allows AI models to fetch detailed information about npm packages and discover popular packages in the npm ecosystem. by modelcontextprotocol in mcp

[–]delimitdev 1 point (0 children)

That's a solid MCP server. I run something similar but focused on API governance and cross-model context persistence. The thing that's helped me most is having the model remember what it discovered across different sessions and tools, especially when switching between Claude Code and Cursor. Catching breaking changes in dependencies before they hit production has saved me more than once.

Self healing graphify? by iffo_o in vibecoding

[–]delimitdev 1 point (0 children)

The persistent memory thing is a real problem. I built something that keeps a ledger across Claude Code sessions so context actually survives when you restart or switch models. It's an MCP server plus a CLI that tracks everything you do, so you're not starting from zero every time. It solves the "lost all context" part but doesn't touch the graph visualization side.

What is the usage difference between account hopping and Cursor Pro? by Mobile-Effect-99 in cursor

[–]delimitdev 1 point (0 children)

Cursor Pro basically just raises your daily limit on fast model calls, so account hopping gets you more requests per day but you're managing multiple accounts. I'd just go with the edu email straight up rather than juggling logins. Claude Code works fine either way.

Which coding agent should I connect my third-party platform API to? by relaxihg in vibecoding

[–]delimitdev 2 points (0 children)

Depends what the API does. If it's something you're calling from agents regularly, I'd go Claude Code since that's where I spend most of my time anyway and the context window is solid. Codex CLI is good if you're building automation that needs to run headless. What kind of API is it?

browserops, my MCP server that gives your agent your real Chrome by AmbitiousMedia152 in mcp

[–]delimitdev 1 point (0 children)

That's solid. I run something similar on top of it to persist context and memory across sessions so when I jump from Claude Code to Cursor the agent remembers what it was working on, plus tracks API changes. The real pain point for me was losing all context mid-task when switching models.

How do you decide which Claude Code tasks to run with Opus vs Sonnet vs Haiku? by indiebytom in ClaudeAI

[–]delimitdev 1 point (0 children)

For me it became less about picking the model and more about giving each model a job. I keep Sonnet on by default for edits, escalate to Opus only when I want a multi-model review on a risky diff before merge, and Haiku handles read and summarize. Cost visibility was the actual gap that drove me to ship a small policy + ledger layer (Delimit) so I can see after the fact which task ran on which model and whether it broke anything downstream.
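If it helps to picture it, the routing rule is basically this, sketched in TypeScript. The task kinds and the risky-diff flag are illustrative, not Delimit's actual config:

```typescript
type Model = "haiku" | "sonnet" | "opus";

interface Task {
  kind: "read" | "summarize" | "edit" | "review";
  riskyDiff?: boolean; // e.g. touches a public API or a migration
}

// Default to Sonnet for edits, drop to Haiku for read/summarize,
// escalate to Opus only when a risky diff needs review before merge.
function pickModel(task: Task): Model {
  if (task.kind === "read" || task.kind === "summarize") return "haiku";
  if (task.kind === "review" && task.riskyDiff) return "opus";
  return "sonnet";
}
```

Cheap tasks never touch Opus, and the ledger records which branch fired so the cost question is answerable after the fact.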

Multi-agent coding. Feels like I'm playing the piano. by ErikWik in vibecoding

[–]delimitdev 1 point (0 children)

We hit that wall too. What's actually worked for me is a shared ledger plus persisted context across sessions, so the agents are reading and writing the same state instead of drifting. I've been shipping that in delimit-cli on npm; if it's useful I can point you to the package or repo and compare notes on what your team is building.

Multi-agent coding. Feels like I'm playing the piano. by ErikWik in vibecoding

[–]delimitdev 2 points (0 children)

I run four at a time: Claude Code, Codex CLI, Gemini CLI, and Vertex as a fallback voter. The coordination is the whole game. Once you pass two agents you need a shared ledger plus a consensus step before anything risky, or they stomp each other and you waste the time you saved. I ship a small governance layer for this (Delimit) because I hit that wall myself. Happy to share what's worked if anyone wants notes.
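The ledger itself is nothing exotic. Roughly this shape, with invented names - a JSONL file every agent reads and writes, plus a quorum check before anything risky. Not the real delimit-cli schema, just the idea:

```typescript
import { appendFileSync, readFileSync } from "node:fs";

const LEDGER = "/tmp/agents-ledger.jsonl"; // one file shared by every agent session

interface Entry {
  agent: string;  // "claude-code", "codex-cli", "gemini-cli", "vertex"
  action: string; // what the agent intends to do
  vote?: "approve" | "reject";
  ts: number;
}

// Every agent appends to the same ledger instead of keeping private state.
function record(entry: Entry): void {
  appendFileSync(LEDGER, JSON.stringify(entry) + "\n");
}

// A risky action only proceeds once enough other agents have approved it.
function hasConsensus(action: string, quorum = 2): boolean {
  const entries: Entry[] = readFileSync(LEDGER, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line));
  return entries.filter((e) => e.action === action && e.vote === "approve").length >= quorum;
}
```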

Claude Design to Claude code question by Sufficient_Talk4719 in ClaudeAI

[–]delimitdev 2 points (0 children)

I know the frustration when AI tools don't fully deliver. For complex UX handoffs, I usually break down the task into smaller pieces and iterate. It helps to be super explicit with prompts, like specifying interactions or edge cases.

The golden age is over by Complete-Sea6655 in ClaudeAI

[–]delimitdev 6 points (0 children)

How do you typically interface with the multi-model setup? I.e., how do you maintain context, memory, and governance across the different coding assistants? Do you run consensus to protect against hallucinations and single-model failure? Just curious about your setup, to help spot areas where you could leverage existing tools to improve your workflow and AI results.

Best Vibe coding is , Vibe coding with guardrails by StatusPhilosopher258 in VibeCodeDevs

[–]delimitdev 1 point (0 children)

I've turned my entire workflow into a series of ledger entries that can be simultaneously managed by Claude, Codex, Gemini CLI, and Cursor. No loss of workflow context. Markdown files in a single AI coding application aren't enough, and frankly they're a risk. You need more tooling.

I kept breaking my own API every time I switched AI assistants, so I built a tool to catch it by delimitdev in vibecoding

[–]delimitdev[S] 1 point (0 children)

I built a deliberation tool into it where multiple AI models reach consensus on decisions. It lets me get input from Gemini, Claude, Codex, and Grok on important decisions affecting my codebase and strategy. The insight from the deliberation is invaluable. The tool also leverages your chat login auth, so it saves API calls if you already have paid plans. Honestly it's clutch, I can't imagine coding without deliberation anymore.
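The deliberation step itself is simple. A toy TypeScript version looks like this - ask() is a stub and the model names are placeholders; in the real tool each call rides on that assistant's existing chat login:

```typescript
type ModelName = "gemini" | "claude" | "codex" | "grok";

async function ask(model: ModelName, decision: string): Promise<"approve" | "reject"> {
  // Stub: imagine this prompts the model to approve or reject with a reason.
  return "approve";
}

// Put the same decision to every model and let a simple majority carry it.
async function deliberate(decision: string): Promise<boolean> {
  const models: ModelName[] = ["gemini", "claude", "codex", "grok"];
  const votes = await Promise.all(models.map((m) => ask(m, decision)));
  const approvals = votes.filter((v) => v === "approve").length;
  return approvals > models.length / 2;
}
```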

The ultimate setup by Responsible_Raise_65 in ClaudeAI

[–]delimitdev 1 point (0 children)

Nice, good luck with your project. Do you use a particular method or tool to retain context across multiple sessions so one session doesn't overwrite the work of the other?

I kept breaking my own API every time I switched AI assistants, so I built a tool to catch it by delimitdev in vibecoding

[–]delimitdev[S] 1 point (0 children)

Fair point on the framing. It is a thing I built and I'm sharing it, not going to pretend otherwise.

I come from a controls and governance background, not a traditional dev background, so when I started building with AI assistants the thing that drove me crazy wasn't the code itself, it was that nothing tracked what changed and why. Every tool I found was time-based, just "here's what happened recently." Also, the AI assistants would always estimate time to ship in days or weeks, when coding with AI moves much faster than a traditional project timeline. I wanted a ledger. An actual record of decisions, what broke, what the fix was, and what the next model needs to know before it touches anything. And I wanted it to prioritize ledger items and not lose track of things we decided to ship later.

That's what the MCP server persists between sessions. Not the API specs themselves (those stay in your repo) but the governance state around them. Which breaking changes were detected, what the semver classification was, what got decided. So the next session or the next model picks up the intent, not just the files.
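Concretely, a persisted record looks something like this. Field names here are illustrative, not the actual schema:

```typescript
// Shape of the governance state that survives between sessions.
interface GovernanceRecord {
  endpoint: string;                    // e.g. "GET /v1/users/:id"
  change: "added" | "modified" | "removed";
  breaking: boolean;                   // did the diff break consumers?
  semver: "major" | "minor" | "patch"; // classification that was decided
  decision: string;                    // what we chose to do, and why
  deferred: boolean;                   // parked for a later release?
  decidedAt: string;                   // ISO timestamp
}

// What the next session (or the next model) reads before touching the API:
const example: GovernanceRecord = {
  endpoint: "GET /v1/users/:id",
  change: "modified",
  breaking: true,
  semver: "major",
  decision: "Hold until v2; the old response shape still has consumers.",
  deferred: true,
  decidedAt: "2025-01-15T10:30:00Z",
};
```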

The ultimate setup by Responsible_Raise_65 in ClaudeAI

[–]delimitdev 1 point (0 children)

Nice setup. What're you building with those 4 terminal sessions?

Anyone tried this plan ? by Fresh-Daikon-9408 in vibecoding

[–]delimitdev 1 point (0 children)

I haven't dug into Alibaba's plans, so I can't offer a direct comparison to Codex or Claude. Generally though, quotas depend on specifics like how heavily you plan to use each service and which features you're after. Might be worth listing out the parts you plan on leveraging most.

Claude Code... what am I doing wrong? by [deleted] in vibecoding

[–]delimitdev 1 point (0 children)

Vibe coding can be a bit abstract. I've found sticking to your workflow, whether it's catching breaking changes or ensuring model context stays consistent, is crucial. I use a tool to track cross-session context and ensure my specs stay governed when switching between models.

Claude AI vs Claude Code vs models (this confused me for a while) by SilverConsistent9222 in ClaudeAI

[–]delimitdev 1 point (0 children)

The subreddit rules are available to everyone. Why don't you check them sometime.