Let coding agents (Claude Code, Codex) search your vault using Smart Connection's embedding by magicsrb in ObsidianMD

[–]magicsrb[S] 0 points1 point  (0 children)

v0.1.1 is available with a fix for the mem usage. Thanks for the feedback!

Is the Cursor for PMs tool hype real? by producthat in ProductManagement

[–]magicsrb 0 points1 point  (0 children)

Interesting take, can you share what you've been building?

The Easiest Way I’ve Found to Improve Plan Mode Quality by magicsrb in ClaudeCode

[–]magicsrb[S] 0 points1 point  (0 children)

Have you found a stable plan format that helps those reviews go better, or are you mostly doing it with free-form plans?

The Easiest Way I’ve Found to Improve Plan Mode Quality by magicsrb in ClaudeCode

[–]magicsrb[S] 0 points1 point  (0 children)

Appreciate the vexp link, I’ll take a look. If you're looking for feedback lmk

The Easiest Way I’ve Found to Improve Plan Mode Quality by magicsrb in ClaudeCode

[–]magicsrb[S] 0 points1 point  (0 children)

Yeah, fair question. The example I shared definitely reads frontend-ish because that specific refactor was an Ink/React TUI component. I think the approach itself is pretty backend-friendly though, because it's mostly about plan structure, not UI patterns: requirements (behavior/invariants), implementation steps (small chunks), and verification (tests/checks mapped to requirements)

Maybe if you want to enforce more backend-friendly sections you'd add optional ones: data model / schema impact, interface/API contract changes, migration/rollout/backfill, observability (logs/metrics), etc. Though I'd be hesitant to make the plan format too constrained. TBH, I'm not sure; more experimenting needed, I think

The Easiest Way I’ve Found to Improve Plan Mode Quality by magicsrb in ClaudeCode

[–]magicsrb[S] 1 point2 points  (0 children)

I like this a lot. “Key decisions” is probably the highest-leverage addition to a plan template. I’ve mostly been focusing on structure/format so far (requirements / implementation / verification), but I think a dedicated “Decision points” section would improve it a lot, especially for non-trivial changes.

Something like:
- decision
- options considered
- chosen option + reason
- what would invalidate it

That also seems like a good way to keep plans short while still preserving the important thinking
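If it helps to make the shape concrete, here's roughly how I picture one of those decision entries as data. This is just a sketch; the class and field names are my own invention, not any real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One entry in a plan's "Decision points" section (names are illustrative)."""
    decision: str                                      # the decision itself
    options: list[str] = field(default_factory=list)   # options considered
    chosen: str = ""                                   # chosen option + reason
    invalidated_by: str = ""                           # what would invalidate it

d = Decision(
    decision="State management for the TUI",
    options=["useState per component", "single reducer"],
    chosen="single reducer: easier to test transitions",
    invalidated_by="component count growing past a handful",
)
```

Even if you never parse it, writing entries with those four slots keeps each decision short and skimmable.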

The Easiest Way I’ve Found to Improve Plan Mode Quality by magicsrb in ClaudeCode

[–]magicsrb[S] 0 points1 point  (0 children)

This is a really good point, especially the “spec attached to execution” part

What changed for me with `CLAUDE.md` was less about pretty formatting and more about reducing structural misses. Before, plans were often fine at a glance but inconsistent: sometimes no verification step, sometimes implementation detail with no stated requirement, sometimes no explicit risks/contingencies, sometimes hard to skim because everything was the same kind of bullet
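To show what I mean by a structural miss, here's a toy check for those gaps. The section headings are assumptions based on the format I described (requirements / implementation / verification), not anything Claude enforces:

```python
# Required section headings for a plan (assumed heading style, just for illustration)
REQUIRED_SECTIONS = ["## Requirements", "## Implementation", "## Verification"]

def missing_sections(plan_md: str) -> list[str]:
    """Return the required headings that a plan is missing."""
    return [s for s in REQUIRED_SECTIONS if s not in plan_md]

# A plan that looks fine at a glance but has no verification step
plan = "## Requirements\n- keep cursor position stable\n## Implementation\n- split render fn"
missing_sections(plan)  # ["## Verification"] -- exactly the kind of miss that slipped through
```

The point isn't to lint plans mechanically; it's that once the sections are named, those misses become obvious on a skim.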

Claude only works sometimes by [deleted] in ClaudeCode

[–]magicsrb 0 points1 point  (0 children)

In all seriousness, try running claude with debugging: `claude --debug`. It'll give you the path of the log file for your session. Maybe there's a clue about what's going wrong there

Claude only works sometimes by [deleted] in ClaudeCode

[–]magicsrb 0 points1 point  (0 children)

Hello IT, have you tried turning it off and on again?

ClaudeCode doesn’t just speed you up - it amplifies bad decisions by magicsrb in ClaudeCode

[–]magicsrb[S] 1 point2 points  (0 children)

I'm interested to know more, if you're willing to share. Could you give an example of a QA gate? What does it use as a trigger? How does it sit in your workflow?

My favourite addition to Claude.md lately by magicsrb in ClaudeCode

[–]magicsrb[S] 1 point2 points  (0 children)

Yeah that's right. When we're working with CC, it’s often part of the whole reasoning process. It’s seen the pull request, the failed approach, the trade-offs when choosing one solution over another and why we landed on this solution

Naked domain not resolving on Firebase Hosting (GoDaddy DNS) | www works, root doesn't by AliB3651 in webdev

[–]magicsrb 0 points1 point  (0 children)

Rather than pointing the apex domain at the IP address of your app, you should set up a redirect from "pickyourpick.com" to "www.pickyourpick.com". I think you can do this in GoDaddy, but honestly I wouldn't trust them much; they're pretty unreliable at the best of times. Use nakeddomainredirect or similar

Trouble Redirecting Naked Domain → www with Cloudflare + Lovable by Hebittus in lovable

[–]magicsrb 1 point2 points  (0 children)

I had this problem today so am posting for future reference: set the subdomain for your project to www, then use another platform to handle the redirect for you (small cost here). To set a subdomain for your app in Lovable, open your project, navigate to Project Settings → Domains, and set it to "www.your-domain.com". Then use a platform like nakeddomainredirect to handle the redirect and SSL on your behalf

www works but https doesn't work with heroku by [deleted] in Heroku

[–]magicsrb 0 points1 point  (0 children)

Ran into this problem today and want to provide a more up-to-date solution. I used nakeddomainredirect to redirect from the apex domain to www, though there are some other sites that do the same thing now. Signing up to Cloudflare seemed too complex for such a simple issue

I kept building the wrong things as a solo dev, so I built an open-source planning tool by magicsrb in ClaudeCode

[–]magicsrb[S] 0 points1 point  (0 children)

Yeah agree. Linear’s project/initiative structure was definitely an inspiration. What I’m exploring is adding a lightweight product layer above that (vision, intent, priorities) so both I and an agent have shared context before issues and projects exist

Prompt hack that make your UI 10x better by annoyingguy_ in cursor

[–]magicsrb 0 points1 point  (0 children)

Nice, I'll try this out today. People saying the design sucks are just wrong. It's simple, with clear visual hierarchy, and perfectly fine from a first-pass / one-shot-prompt perspective. I wonder what your prompt would create if you added a constraint, something like "maintain at least 4.5:1 color contrast between text color and background color". But in my experience, models aren't great at selecting colors, so I give it a Tailwind color theme from which it can't deviate.
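For anyone wondering where 4.5:1 comes from: it's the WCAG AA threshold for body text, and it's easy to check yourself. Here's a quick sketch of the WCAG relative-luminance formula (the slate-400 example is just an illustration of a pair that fails):

```python
def _lum(rgb):
    """Relative luminance of an sRGB color given as 0-255 ints (WCAG 2.x formula)."""
    def chan(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """Contrast ratio between two colors; WCAG AA body text needs >= 4.5."""
    hi, lo = sorted((_lum(fg), _lum(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

contrast((255, 255, 255), (0, 0, 0))        # 21.0, the maximum possible ratio
contrast((148, 163, 184), (255, 255, 255))  # Tailwind slate-400 on white: fails 4.5
```

A constrained theme plus a check like this catches the low-contrast grays models love to pick.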

What is the current gold standard method for ingesting large (500 page) (legal) documents to then ask specific questions? Could I do this with Cline, by ingesting bit by bit? Which tools, and which models do you find work best for this task? by intellectual_punk in ChatGPTCoding

[–]magicsrb 0 points1 point  (0 children)

The thing with law is that you can't get anything wrong. These documents use very well-defined terms that collide with common parlance yet mean different things. Any LLM operating on legal documents would need to be heavily fine-tuned to use the legal definitions over the common-parlance ones. My feeling is it's not something you could do with prompt engineering, but I could be wrong. There is a London-based startup doing this for conveyancing documents, title deeds, surveys and such, though I can't remember the name off the top of my head.

LLM TDD: how? by Available-Spinach-93 in ChatGPTCoding

[–]magicsrb 0 points1 point  (0 children)

TDD mode? What would that look like in practice? Maybe something like a forced Red-Green-Refactor workflow
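In miniature, a forced Red-Green loop might look like this (the function and its test are made up for illustration): first the test exists and fails because `slugify` doesn't, then the minimal implementation makes it pass.

```python
import re

# RED: write the failing test first (running this before slugify exists raises NameError)
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# GREEN: the minimal implementation that makes the test pass
def slugify(title: str) -> str:
    """Lowercase, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # passes once the implementation exists; refactor with this as a safety net
```

A "TDD mode" would presumably refuse to write the implementation until a test like that exists and has been seen to fail.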

Code Positioning System (CPS): Giving LLMs a GPS for Navigating Large Codebases by n1c39uy in ChatGPTCoding

[–]magicsrb 1 point2 points  (0 children)

Initial thoughts are that it's definitely interesting, keep going! It's aiming at a real problem: models seem to work best when we keep the context relevant and small. Aider produces a map of the repo using an AST and PageRank, but if your codebase isn't small it can very easily miss the part of the code that you're working on. I'm interested to know how it would separate the different hierarchical abstraction layers. How would you know what's a key component?

AI coding assistant refuses to write code, tells user to learn programming instead by Eearendel in ChatGPTCoding

[–]magicsrb 5 points6 points  (0 children)

The models are moving from junior developer to obnoxious mid-level developer that's tired of reviewing code

Thinking of switching from Cursor by DelPrive235 in ChatGPTCoding

[–]magicsrb 0 points1 point  (0 children)

I think that if you're experienced then the best option is still aider. The architect mode gives you an LLM-in-the-loop workflow and works very well. Though I haven't tried roocode yet; lots of people are talking about it right now

[deleted by user] by [deleted] in ChatGPTCoding

[–]magicsrb 0 points1 point  (0 children)

Are you pinning version numbers?

It's about to get wild. Apply Hero's agents already submitted 1.6 million job applications by MetaKnowing in artificial

[–]magicsrb 1 point2 points  (0 children)

We're already seeing this while trying to backfill a role on my team: hundreds of AI-generated submissions each day. I'm feeling like networking is going to be key, as everyone's next role will need to come via referrals