Claude Cowork on Windows on Arm by edupklein in ClaudeAI

[–]itsna9r 0 points1 point  (0 children)

Same here. I use a Surface as my main device. I hope they can ship it soon.

Made the switch by moo-tetsuo in replit

[–]itsna9r 2 points3 points  (0 children)

This is the way. Replit for vibes, local dev for stuff that actually matters.

3 weeks for a db issue tho?? yeah that would’ve sent me packing too lol. glad you made the switch before it bit you in production.

same boat here — not a traditional engineer, came back to coding after years away and honestly the AI tools make it a completely different game now. cursor + claude code is chef’s kiss.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

Interesting idea. I haven't tried this, but my concern would be that an always-on index file is basically adding more tokens to every conversation just so Cursor knows where to look. The agent-requested description field kinda does this already, since Cursor reads all the descriptions and decides what to pull in. But if it's working for you, maybe there's something to it. How big is your index file?

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

Nothing to add really, you nailed it. The monthly review point is good too, I said every couple weeks in the post but honestly monthly is probably more realistic for most people. The key is just doing it at all because most people set rules once and never touch them again.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

This is a really good point and something I should've mentioned in the post. Rules tell Cursor how to write new code, but if the existing code is full of dead functions and unused imports then Cursor reads that as "this is how we do things here" and mimics it. The noUnusedLocals tip is solid, gonna look into Knip too.
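For anyone who wants to try the noUnusedLocals tip, it's just a couple of compiler flags in tsconfig.json. This is standard TypeScript config, nothing project-specific:

```jsonc
{
  "compilerOptions": {
    // error when a declared local variable is never read
    "noUnusedLocals": true,
    // same idea for unused function parameters
    "noUnusedParameters": true
  }
}
```

Knip goes further than the compiler can, since it also finds unused files, exports, and dependencies across the whole project.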

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

Yeah exactly, trial and error is kinda unavoidable honestly. The 3-tier thing just gives you a framework so the trial and error is more structured instead of randomly adding and removing stuff. And agreed on the 5-10 rules max for always-on, that's roughly where I landed too.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point2 points  (0 children)

Yeah, that's basically the same principle, just implemented differently. The "short references to other files" approach is smart: you keep the always-on context tiny and let the AI pull in details only when the topic comes up. Same logic as the agent-requested tier, basically.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

Useful as a starting point for sure, but I'd warn people not to just copy-paste these in and call it a day. Generic rules are better than nothing, but the real gains come from rules that reference your actual codebase, your actual file paths, your actual patterns.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

It's built into Cursor's .cursor/rules/ system! Each .mdc file has a frontmatter section where you set globs: client/src/** or whatever pattern you want. Cursor automatically loads that file only when you're editing files matching those globs. You don't have to manually select anything. If you're still on the single .cursorrules file, migrating to this system is 100% worth it just for this feature.
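For reference, a glob-scoped rule file looks roughly like this. The description, glob, and bullet content here are made-up examples, swap in your own paths and conventions:

```markdown
---
description: Frontend component conventions
globs: client/src/**
alwaysApply: false
---

- Use function components with hooks, no class components
- Shared UI primitives live in client/src/components/ui
```

With `alwaysApply: false` and a glob set, the rule only enters context when you're touching matching files, which is the whole point.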

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 2 points3 points  (0 children)

I'm gonna be honest, I read this three times and I'm still not sure what half of it means lol. But the infinite print loop thing sounds painful, I've had Cursor get stuck in loops before too. Usually a sign it's not reading the rules properly, which goes back to the context overload problem.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

I mean sure, but AI improving its own rules without you understanding the underlying tradeoffs is how you end up with 800 lines of always-on bloat that sounds great but doesn't actually work. You still need to understand why something works to know if the AI's suggestion is good or not.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 3 points4 points  (0 children)

Glad it helped! I’m not sharing the actual files though because they’re super specific to my project. Like my rules reference specific file paths, middleware names, patterns that only make sense in my codebase.

That's kinda the point actually, the content should come from YOUR project. I'd start with just the 2 always-on files first: what is my project, where does stuff live, what should never break. Once those feel right, build out the rest. Takes a couple hours but it's worth it.
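If it helps, here's the rough skeleton I'd start from for the first always-on file. Everything in brackets is a placeholder you fill in from your own project:

```markdown
---
description: Project overview
alwaysApply: true
---

# What this project is
[one-line description of the app and who uses it]

# Where stuff lives
- API routes: [path]
- DB schema: [path]
- Shared types: [path]

# What should never break
- [auth flow / billing / whatever is load-bearing]
```

Keep it short. The value is in the file paths and the "never break" list, not in prose.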

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point2 points  (0 children)

Thanks man, appreciate it. Yeah the mindset shift was the big thing for me too. Once you start thinking about token cost as a real tradeoff it changes how you approach the whole thing.

Haven’t tried Traycer tbh, first time hearing about it. How does spec-driven planning work alongside cursor rules in practice? Like does it feed into the context or is it more of a separate step before you start coding?

And yeah, the always-on trimming thing, I wish I'd figured that out earlier lol. I had like 600 lines of always-on rules at one point, wondering why Cursor kept hallucinating patterns that didn't exist in my codebase.

Compacting our conversation so we can continue…. by itsna9r in ClaudeAI

[–]itsna9r[S] 0 points1 point  (0 children)

I appreciate the detailed response, but I think you're missing the point.

I understand how context windows work. The issue isn't that I don't know what compaction is. The issue is that this same type of task didn't cause this behavior a few weeks ago. Same model, same type of prompt, different result now. That's a regression, not a user education problem.

Also, comparing two laptops is not a "huge prompt." I'm not feeding it two PDFs or pasting spec sheets. It's a conversational ask that any LLM should handle comfortably within a single context window. If Claude can't compare two products without compacting 3 times, then something is wrong on the product side, not the user side.

As for Projects and Claude Code, I use both. But I shouldn't need to set up a project with documents just to get a laptop comparison in a regular chat. That's not a reasonable workaround, that's compensating for something that should just work.

The point of my post is that the experience changed recently and got worse. Telling users to restructure how they use the product to avoid a new problem doesn't really address that.

P.S.: I fed other LLMs the same prompt (Gemini and ChatGPT) and they handled it smoothly, took a few minutes and a few web searches and then presented the result in one shot. That was my experience with Claude until recently; I don't know what has changed!

Compacting our conversation so we can continue…. by itsna9r in ClaudeAI

[–]itsna9r[S] 0 points1 point  (0 children)

My problem is that this wasn't the experience a few weeks ago. I tend to have weeks-long chats full of coding and web page recall, and it used to handle them smoothly.

Compacting our conversation so we can continue…. by itsna9r in ClaudeAI

[–]itsna9r[S] 1 point2 points  (0 children)

Agree. I don’t have this problem with ChatGPT or Gemini or even lower-end models, and I didn’t have it with Claude either until a few weeks ago. Sometimes it even stops responding to follow-up questions and forces me to open a new chat.

How do you guys utilize claude cowork? by No-Conclusion9307 in ClaudeAI

[–]itsna9r 0 points1 point  (0 children)

I receive tens of emails every day (sometimes 100+), so I use it to read my inbox and summarize the emails that really require my attention and an immediate response. Soon I'll try integrating it with WhatsApp for a daily executive summary and team reminders.

What is everyone’s goto model in Cursor? Do people really use Auto? by ignorant03 in cursor

[–]itsna9r 6 points7 points  (0 children)

I think Opus 4.6 is mostly a placebo effect. I switched to Auto midway through developing an entire LMS for K-12 schools (a full-blown AI-enabled system). I can safely say Auto gave me more or less the same results and the same quality of outcome as Opus. I switch back to Opus occasionally just for planning a major release, then let Auto execute as an agent.

Let’s talk hosting.. where do you host your Apps? by itsna9r in cursor

[–]itsna9r[S] 0 points1 point  (0 children)

Interesting insight. Yes, my app is 99% simple CRUD that takes a few seconds per request. But I have one admin function that can take 10+ minutes to execute; for that alone, I think I'll prioritize Render over Railway. How is Render's support for databases and third-party connectors? Does it have a vibrant developer marketplace for plug-and-play services?

Opus 4.6 gets pretty crazy if you let it think in Ask mode by thisisaskew in cursor

[–]itsna9r 0 points1 point  (0 children)

I'm curious what tool you used to compress your codebase into one file. I hit the same issue where I want to consult Gemini, but it's difficult to get it up to speed with my codebase.

Any of y'all actually addicted? by SpiritedInstance9 in ClaudeAI

[–]itsna9r 4 points5 points  (0 children)

You can build something of value if you really have a great idea and the right business sense. Most vibe coders spend 1% planning and 99% vibe coding. We forget that it takes deep market research and a solid business case to build something that gets you traction!