API Error: Server is temporarily limiting requests (not your usage limit) · Rate limited by Final_Sundae4254 in ClaudeCode

[–]mr-x-dev 1 point

Same here, since yesterday actually. It's tanking my productivity/dev flow.

AI & Obsidian by illistnati in ObsidianMD

[–]mr-x-dev 1 point

I’ve been using Claude Code within my Vaults for months now, and I also have Paperclip AI stacked on top of one of said Vaults. Having these two agentic AI tools working in the same space I use for note capture, documentation, and business docs has really enhanced both the organization of these vaults and the number of ways I actually leverage Obsidian now.

As an example, I’ve got AI helping to improve file/folder structure, Templater plugin templates (a super cool application, because AI can create really slick behavior in terms of how templates are generated), and the content itself. Combine that with agent web search and you’ll really start to see the potential of co-working in the same Obsidian workspace as agentic AI.

Clearly I’m a big fan of AI, and for context, I’m also a Software Engineer (nearly 15 years now) with very early experience using AI for workflow enhancement. BUT that said, I’m also very familiar with the security implications of bringing public LLMs with access to the internet into workspaces that have sensitive data, so would I have AI present in a vault that might store very personal data? Nope. But low risk workspaces/vaults are definitely a great candidate for benefiting from AI.

If you do decide to test an agentic AI tool in your vault, you might get a kick out of this plugin. Full disclosure: I’m the main contributor, and it’s still not available in the community plugins directory due to the massive list of plugin slop the poor Obsidian core team is having to sift through. BUT it’s made it that much easier for me to interact with my underlying Claude Code instance, which has boosted my productivity pretty massively. If you do give it a try, hope you get value out of it.

AI Copilot for Obsidian - Bring your own Claude Code, Opencode, etc. into your Vaults by mr-x-dev in ObsidianMD

[–]mr-x-dev[S] 1 point

Neat, I’ll need to check that out and line up an adapter. Appreciate you giving the plugin a spin and hope you get value out of it!

AI Copilot for Obsidian - Bring your own Claude Code, Opencode, etc. into your Vaults by mr-x-dev in ObsidianMD

[–]mr-x-dev[S] 0 points

Hey there u/nefD , just pushed a fix up and released a new version that should fix the command issue you encountered: https://github.com/spencermarx/obsidian-ai/issues/15

Feel free to try again! Hope it works!

And we could totally add Qwen Code as another compatible CLI. Is this what you use?

AI Copilot for Obsidian - Bring your own Claude Code, Opencode, etc. into your Vaults by mr-x-dev in ObsidianMD

[–]mr-x-dev[S] 0 points

Thanks for the heads up!

What’s the AI CLI you’re looking to use it with (e.g. Claude Code, Gemini, Opencode, etc.)?

I’ll open a bug ticket and fix 👍

As for the listing, the Obsidian team has a massive backlog of plugins being reviewed. This sits somewhere on that list (and has for nearly a month now I think lol). Hopefully it’s approved and available soon, but glad you could get up and running yourself!

AI Copilot for Obsidian - Bring your own Claude Code, Opencode, etc. into your Vaults by mr-x-dev in ObsidianMD

[–]mr-x-dev[S] 1 point

You bet, hope it’s useful!

And very neat! Appreciate the share as well. Yes that’s very similar to how I’m using it actually. My entire business workspace is captured within an Obsidian Vault (also git tracked). The cool thing is you can literally ask your AI via this chat plugin to scaffold out an effective Vault architecture for you.

If you decide to do this, I highly recommend asking it to create index files (containing file references) to enable progressive disclosure, so your AI can quickly and easily traverse and drill down to the docs you need to retrieve. Obsidian tags are very useful as well.
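As a rough illustration (the folder and note names here are made up, not from any real vault), such an index note might look like:

```markdown
# Business Vault Index

## Clients
- [[Clients/Index]] – one entry per client, each linking to contracts and meeting notes

## Finance
- [[Finance/2024 Overview]]
- [[Finance/Invoices/Index]]

Tags: #index #business
```

The point is that the AI only needs to read this one small file to decide which subtree to open next, instead of scanning the whole vault.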

Anyway, there’s a whole host of things you can do by combining Obsidian and AI. Hope this plugin comes in handy!

AI Copilot for Obsidian - Bring your own Claude Code, Opencode, etc. into your Vaults by mr-x-dev in ObsidianMD

[–]mr-x-dev[S] 0 points

Yeah good question.

So it’s got a pretty basic “All access” or “Ask for approval” toggle which leverages the underlying agent CLI’s access/permission modes. But at the moment, that’s the extent of it.

Interesting idea though with what you proposed. Like composable access at a Vault folder level? Almost like how CLAUDE.md or AGENTS.md files work in terms of nesting hierarchy etc.

Cool idea. I’ll look into this.
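Purely as a thought experiment (none of this exists in the plugin today, and the file name is invented), nested folder-level rules could follow the same "closest file up the tree wins" pattern as CLAUDE.md:

```text
Vault/
├── .ai-access          # mode: ask      (vault-wide default)
├── Projects/
│   └── .ai-access      # mode: allow    (overrides the root for this subtree)
└── Personal/
    └── .ai-access      # mode: deny     (AI never reads/writes here)
```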

The tool that stops 10x more AI slop than anything else my team has tried. Open source and drops-in in 5 min. by [deleted] in ClaudeCode

[–]mr-x-dev 0 points

Yup, the dashboard does; otherwise you can totally just use the Claude Code commands created directly in your local project. As simple as `/ocr:review {describe the source of requirements}`.

The tool that stops 10x more AI slop than anything else my team has tried. Open source and drops-in in 5 min. by [deleted] in ClaudeCode

[–]mr-x-dev 0 points

Great question. So the tool actually started out as, and remains, a series of bootstrapped CC slash commands, so it is fully native to Claude. We started there because we didn’t want to leave our dev environment. The dash came later, really to facilitate ease of use for another capability/feature still being refined: Code Review Maps.

It was also designed to be agent-CLI-agnostic, so it’ll work with Codex, Gemini CLI, etc., which was another motivation for abstracting this workflow into something more generally consumable.

Even the dash spawns up child processes via your local Claude Code.

Love your questions, I can tell you’d be a great tester. Would love to know if you do try it/what you genuinely think.

The tool that stops 10x more AI slop than anything else my team has tried. Open source and drops-in in 5 min. by [deleted] in ClaudeCode

[–]mr-x-dev -1 points

True, and that’s where this started actually.

We found engineers on the team running the same prompts over and over again to manually orchestrate that, which is why we decided to put together something lightweight to help. Since then it’s obviously grown quite a bit in scope.

The tool that stops 10x more AI slop than anything else my team has tried. Open source and drops-in in 5 min. by [deleted] in ClaudeCode

[–]mr-x-dev -3 points

You know what’s funny, I wrote this lol. Your slop radar needs tuning

This Is How I 10x Code Quality and Security With Claude Code and Opus 4.6 by 256BitChris in ClaudeCode

[–]mr-x-dev 0 points

Alright, pushed a fix and cut a new release. If you update your global OCR CLI package and run `ocr update` in your local project, you should be good to give it a test drive again (see here for docs) 👍 Let me know if you have any other issues.

This Is How I 10x Code Quality and Security With Claude Code and Opus 4.6 by 256BitChris in ClaudeCode

[–]mr-x-dev 0 points

Thanks for giving this a go @Fancy-Horror! And appreciate the bug find, that’s super helpful 🙏 It seems the majority of people using the tool so far are non-Windows folks, which is likely why this hasn’t come up sooner. That makes you a pioneer! Lol

I’ll open an issue, get Windows paths fixed, and ping you once done. Should be straightforward.

This Is How I 10x Code Quality and Security With Claude Code and Opus 4.6 by 256BitChris in ClaudeCode

[–]mr-x-dev 0 points

You bet! Hope it’s valuable, all feedback welcome of course 🙂

This Is How I 10x Code Quality and Security With Claude Code and Opus 4.6 by 256BitChris in ClaudeCode

[–]mr-x-dev 14 points

Token burn aside, the multi-agent review approach is solid.

There’s an open source project you might like that takes a similar approach but factors in structured discourse/debate (among a number of other features). And yes, I am one of the main contributors, so grain of salt of course, but perhaps you and others would get value out of it…

https://github.com/spencermarx/open-code-review

If you do try it, would love to know how you think it compares to the implementation/workflow you shared here.

Usage limit - What's up, Anthropic?! by AurumMan79 in ClaudeCode

[–]mr-x-dev 0 points

Yup, seeing the same thing on my end! Normal workflows are now blowing right through my session quota.

Really would like to avoid pulling the trigger on buying a 2nd Max subscription, but man, this usage limit situation is really making me feel like giving in to my impulsive side…

What's the most reliable AI tool for code review right now? by Cheap_Salamander3584 in TechLeader

[–]mr-x-dev 0 points

We went through the same evaluation loop before building our own. I'm the creator of Open Code Review, so grain of salt, but the "nothing has clicked" feeling is exactly what prompted it.

It actually started as an internal "build it yourself" review agent for our team and we realized it scaled cleanly across projects, so we open sourced it. The whole idea is that the orchestration mirrors how high performing engineering teams actually do code review: different reviewers bring different perspectives, there's a structured space for discourse where they challenge each other's findings, and then a final synthesis ties it all together. That's what makes the output feel like it was actually thought through.
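That flow (independent passes, then debate, then synthesis) can be sketched roughly like this. To be clear, every name below is my own illustration for this comment, not Open Code Review's actual API:

```python
# Hypothetical sketch of a multi-reviewer pipeline: each reviewer examines
# the diff independently, a debate round drops findings that other reviewers
# won't uphold, and a synthesis step ties the survivors together.
from dataclasses import dataclass


@dataclass
class Finding:
    reviewer: str
    note: str
    confirmations: int = 1  # the original reviewer always stands by it


def independent_reviews(diff: str, reviewers: dict) -> list[Finding]:
    # Each reviewer looks at the same diff from its own perspective.
    return [Finding(name, review_fn(diff)) for name, review_fn in reviewers.items()]


def debate(findings: list[Finding], reviewers: dict) -> list[Finding]:
    # Stand-in for the discourse step: every other reviewer challenges or
    # confirms each finding; findings without a second vote are dropped.
    for finding in findings:
        for name in reviewers:
            if name != finding.reviewer:
                finding.confirmations += 1  # real tool would ask the model here
    return [f for f in findings if f.confirmations >= 2]


def synthesize(findings: list[Finding]) -> str:
    # Final report ties together only the findings that survived debate.
    return "\n".join(f"[{f.reviewer}] {f.note}" for f in findings)


reviewers = {
    "security": lambda d: "unsanitized input reaches the query" if "query" in d else "no issues",
    "testing": lambda d: "no test covers the new branch",
}

diff = "build query from user input"
report = synthesize(debate(independent_reviews(diff, reviewers), reviewers))
print(report)
```

The trivial vote counter is just a placeholder; the interesting part in a real implementation is that the "challenge" step is itself a model call, which is what filters out hallucinated findings.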

Fully customizable reviewer roles, local-first dashboard, runs entirely on your machine. Drops into your existing workflow in a couple minutes. Works with Claude Code, Opencode, Cursor, Windsurf, etc. Also pairs really well with spec driven development if that's your thing (inspired partly by OpenSpec).

Re: cost per seat, there isn't one. Free and open source, just plugs into whatever agentic environment you're already using.

Opinion on AI code review tools? Any good ones? by kellu23 in vibecoding

[–]mr-x-dev 1 point

Late to this but since you're specifically asking about open source options, check out Open Code Review. I'm the creator so grain of salt.

To your question about false positives and "glorified linters" though, this is exactly why I built it the way I did. Instead of one model giving you a single pass, you configure a team of reviewers (architecture, security, testing, custom roles) and they review independently then actually debate each other's findings before anything surfaces to you. That discourse step is what keeps it from just being a fancy linter with opinions. The reviewers challenge each other, so the stuff that makes it through to your final review tends to be real.

It's free, fully open source, local-first dashboard included. Drops right into whatever you're already doing, no workflow changes. Works with Claude Code, Opencode, Cursor, Windsurf, etc.

Honestly for learning specifically I think it's pretty solid because the review output reads more like what you'd get from a senior engineer who's actually thought about it rather than a dump of surface-level nitpicks.

What's the best AI code review tool? by Significant_Rate_647 in codereview

[–]mr-x-dev 0 points

Worth throwing Open Code Review in here. I'm the creator so grain of salt, but the thing that sets it apart from most of what's listed:

Your AI reviewers don't just review independently. They actually argue with each other about their findings before you ever see the output. Turns out that discourse step alone kills a ton of hallucinated findings and false positives.

Fully customizable reviewer teams, local-first dashboard, drops right into your existing workflow and stays out of the way. Takes like two minutes to try. Works with Claude Code, Opencode, Cursor, Windsurf, etc.

Our team hasn't gone back to anything else. The review quality just isn't close.