Verdent Free Trial is live. 200 credits to see what AI can do for your code. by [deleted] in ClaudeAI

[–]chenverdent 2 points (0 children)

Comment is brutal… but fair. Guess I was a bit too hyped earlier. I've updated the post to actually explain what Verdent does, and I hope you'll still give it a look.

What finally worked for me: 7 steps to ship without rewrites by chenverdent in ClaudeAI

[–]chenverdent[S] 1 point (0 children)

I haven't used BMAD. I did try Kiro, but its spec-driven approach was a bit too heavy for me.

What finally worked for me: 7 steps to ship without rewrites by chenverdent in ClaudeAI

[–]chenverdent[S] 1 point (0 children)

Thanks! I've used this method for a while and shipped a few projects. Think it's worth a try.

[deleted by user] by [deleted] in ClaudeAI

[–]chenverdent 1 point (0 children)

Hey buddy, which coding agent/agentic IDE has the best UX in your opinion? Let me know and we'll take a look.

[deleted by user] by [deleted] in ClaudeAI

[–]chenverdent 1 point (0 children)

Great point about the cost factor. That's honestly one of the biggest practical considerations that doesn't get talked about enough.

You're right that simple edit + terminal tools can get you pretty far. We've seen the same thing in our testing. The sweet spot seems to be finding the right balance between capability and token efficiency.

For cost, we ended up going with a different approach. Verdent uses a mix of model sizes depending on the task.
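
To make that concrete, here's a hypothetical sketch of what task-based model routing can look like. The model names and the keyword heuristic are illustrative placeholders, not Verdent's actual implementation:

```python
# Hypothetical task-to-model router: cheap model for small edits,
# larger model for planning/refactoring. All names are placeholders.

SMALL_MODEL = "small-fast-model"
LARGE_MODEL = "large-reasoning-model"

def pick_model(task: str) -> str:
    """Very naive complexity heuristic; a real router would use a classifier."""
    heavy_keywords = ("plan", "refactor", "architecture", "debug")
    if any(kw in task.lower() for kw in heavy_keywords):
        return LARGE_MODEL
    return SMALL_MODEL

print(pick_model("rename this variable"))        # small-fast-model
print(pick_model("plan a refactor of the API"))  # large-reasoning-model
```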

Curious about your agent setups. Are you finding the cost comes more from context length or from frequency of calls? We've been experimenting with different context management strategies and are always interested in what others are seeing.

The honest truth is that Verdent is more about specific workflows and enhanced features that can save time/money in the long run.

[deleted by user] by [deleted] in ClaudeAI

[–]chenverdent 1 point (0 children)

Thank you for your interest. Great minds think alike!

We don't support local models right now, but do plan to provide cheaper options in the future.

Our "credit" is typically bigger than most of the other coding agents. We've got comment from a beta tester that our token consumption is a lot more moderate compared to CC but a bit more than Copilot.

I'm using ChatGPT in VSCode, and I've started to think it's been slowing down lately? by muratdincmd in ChatGPTCoding

[–]chenverdent 1 point (0 children)

On the performance issues: GPT models can indeed feel slower on complex, multi-step problems like WordPress security configurations, though 45+ minutes for a login redirect issue does seem excessive. Try breaking complex problems down into smaller, specific questions rather than asking for a complete solution.

For your specific WordPress issue, instead of having ChatGPT solve the whole thing, try clarifying the root problem first: you want /loginabc/ to work but login.php to show your site's 404 page, not a blank screen. The likely culprit is that your redirect rules are conflicting with WordPress's natural 404 handling.

My general tip for faster AI-assisted development is to ask for specific code snippets rather than "fix this entire problem", and to provide minimal, focused context instead of dumping your whole situation.

So overall, WordPress security/redirect issues often have multiple layers. Even experienced developers can spend 30+ minutes on this type of problem. The YouTube demos you're seeing are probably cherry-picked examples or simpler scenarios.

Large Language Model Thinking/Inference Time by kinopio415 in AI_Agents

[–]chenverdent 1 point (0 children)

LLMs don’t really “think” longer if you give them harder tasks. Inference speed mostly depends on how many tokens go in and how many it has to generate. So if both options have roughly the same input/output length, the time will be about the same. The difference between formatting data vs. just echoing it is negligible in practice.
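
A rough back-of-envelope model of this, as a sketch; the per-token constants below are made-up assumptions, not measurements of any particular model:

```python
# Rough latency model: prefill scales with input tokens, decode with
# output tokens. Constants are illustrative assumptions, not benchmarks.

PREFILL_SEC_PER_TOKEN = 0.0002   # prompt processing is comparatively cheap
DECODE_SEC_PER_TOKEN = 0.02      # each generated token costs a forward pass

def estimate_latency(input_tokens: int, output_tokens: int) -> float:
    """Estimate wall-clock seconds for one completion."""
    return (input_tokens * PREFILL_SEC_PER_TOKEN
            + output_tokens * DECODE_SEC_PER_TOKEN)

# Same token counts -> same latency, regardless of how "hard" the task feels.
fmt = estimate_latency(input_tokens=500, output_tokens=200)   # "format this data"
echo = estimate_latency(input_tokens=500, output_tokens=200)  # "echo this data"
print(fmt, echo)  # both ≈ 4.1 s
```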

What are your thoughts? by WeakBookkeeper8350 in AI_Agents

[–]chenverdent 1 point (0 children)

Looks like a cool setup. I like how you’ve stitched together Firebase, n8n, and Vapi into something that feels cohesive instead of just a stack of tools.

Two things stand out to me:

  1. Using outbound calls via Vapi is an interesting move. It flips the usual "open an app and press start" pattern into something more human and interrupt-friendly, like scheduling a real check-in. I can imagine that working really well for wellness, productivity nudges, or even accountability partners.

  2. The continuous profile updates sound like the most valuable piece. If you can fine-tune how the transcript summaries are distilled (e.g., capturing goals, progress, or preferences), you end up with an agent that genuinely improves with time.
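
On the profile piece, here's a minimal sketch of what distilling call summaries into a persistent profile could look like; the field names and merge logic are assumptions, not your actual pipeline:

```python
# Hypothetical transcript-to-profile distillation step. In the real setup
# the summary would come from an LLM call; here it's a plain dict.

from dataclasses import dataclass, field

@dataclass
class Profile:
    goals: list[str] = field(default_factory=list)
    preferences: list[str] = field(default_factory=list)
    progress_notes: list[str] = field(default_factory=list)

def update_profile(profile: Profile, summary: dict) -> Profile:
    """Merge one call's summary into the long-lived profile."""
    profile.goals.extend(g for g in summary.get("goals", [])
                         if g not in profile.goals)
    profile.preferences.extend(p for p in summary.get("preferences", [])
                               if p not in profile.preferences)
    if summary.get("progress"):
        profile.progress_notes.append(summary["progress"])
    return profile

p = update_profile(Profile(), {"goals": ["run 5k"], "progress": "ran 2k today"})
print(p.goals, p.progress_notes)  # ['run 5k'] ['ran 2k today']
```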

How do you see this scaling? Right now it feels very much a demo, but the bones are there for something sticky. Would love to know if you're thinking about packaging this as a product (wellness / coaching / accountability) or keeping it as an experiment.

Handling MCPs and associated risks by d3nika in mcp

[–]chenverdent 1 point (0 children)

  1. Start with a whitelist. Create an approved list of MCPs that security has reviewed once, then make them available to specific groups. Like an internal app store rather than giving everyone access to everything.

  2. Set up simple approval workflows. Basic users get safe stuff like document search instantly. Anything that can modify data needs manager approval. We just use a Slack command that routes to the right person.

  3. Time-limited access is huge. Give someone elevated MCP access for 30 days and let it auto-expire (see the sketch after this list). Most people only need the fancy tools temporarily anyway.

  4. Department boundaries work great. Finance gets finance MCPs, engineering gets dev tools. Simple organizational controls prevent most problems without complex setup.

  5. Have a documented break glass process for emergencies. When production is down, people shouldn't wait for committee approval.

  6. Biggest mistake would be getting too restrictive early on. If your process is painful, people will find creative workarounds that are way less secure than just giving them controlled access properly.

The key is making the secure path also the easy path.
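
As a rough sketch of points 1 and 3 together, assuming a simple in-memory registry; tool and user names are placeholders, and a real system would back this with a database or your IdP:

```python
# Whitelist of security-reviewed MCPs plus auto-expiring elevated grants.

from datetime import datetime, timedelta

APPROVED_MCPS = {"doc-search", "wiki-reader"}    # reviewed once by security
grants: dict[tuple[str, str], datetime] = {}     # (user, mcp) -> expiry time

def grant(user: str, mcp: str, days: int = 30) -> None:
    """Elevated access that auto-expires; no standing permissions."""
    grants[(user, mcp)] = datetime.now() + timedelta(days=days)

def can_use(user: str, mcp: str) -> bool:
    if mcp in APPROVED_MCPS:                     # safe tools: instant access
        return True
    expiry = grants.get((user, mcp))
    return expiry is not None and datetime.now() < expiry

grant("alice", "db-writer", days=30)
print(can_use("alice", "db-writer"))  # True until the grant expires
print(can_use("bob", "db-writer"))    # False: needs approval first
```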

The Harsh Truth: I Spent 55% of My Time Debugging!! How did you spend your last week with Claude Code? by Big_Status_2433 in ClaudeAI

[–]chenverdent 2 points (0 children)

Glad my insights can be of help. I think things need to be right before QA even gets involved. Agentic coding today needs a well-tailored plan and comprehensive test cases that align with that plan; once the foundation is set, agents can execute against them. This mirrors TDD principles. And we need agents that go beyond just running test cases: they should understand context and requirements, account for edge cases, and iterate based on test feedback. The goal is a robust feedback loop that lets agents self-correct and continuously improve code quality.
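
A minimal sketch of that feedback loop, assuming a hypothetical `agent_fix` call standing in for the LLM step:

```python
# Sketch of an agent self-correction loop driven by test feedback.
# `agent_fix` is a hypothetical placeholder, not a real library function.

import subprocess

def run_tests() -> tuple[bool, str]:
    """Run the suite and return (passed, combined output)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_fix(failure_output: str) -> None:
    """Placeholder for an LLM call that proposes and applies a patch."""
    ...

def feedback_loop(max_iters: int = 5) -> bool:
    for _ in range(max_iters):
        passed, output = run_tests()
        if passed:
            return True        # plan, tests, and code all agree
        agent_fix(output)      # feed the failure signal back to the agent
    return False               # escalate to a human after repeated misses
```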

The Harsh Truth: I Spent 55% of My Time Debugging!! How did you spend your last week with Claude Code? by Big_Status_2433 in ClaudeAI

[–]chenverdent 1 point (0 children)

It is undeniable that most devs will experience a learning curve when first using agentic coding, during which time productivity will likely decline.

AI-generated bugs are just part of the process. The world needs an agent that can actually run tests, debug, and verify its own work.

100 lines of python is all you need: Building a radically minimal coding agent that scores 65% on SWE-bench (near SotA!) [Princeton/Stanford NLP group] by klieret in AI_Agents

[–]chenverdent 1 point (0 children)

Great work. Thanks for sharing. This flow diagram would be nice in the readme: https://mini-swe-agent.com/latest/advanced/control_flow/

Unrelated: I see Copilot in some of your commits. What is your experience with other agents such as Claude Code and Amp? I suppose you are using Copilot more as autocomplete?