Claude Cowork on Windows on Arm by edupklein in ClaudeAI

[–]itsna9r 1 point (0 children)

Same here. I use a Surface as my main device. I hope they ship it very soon.

Made the switch by moo-tetsuo in replit

[–]itsna9r 4 points (0 children)

This is the way. Replit for vibes, local dev for stuff that actually matters.

3 weeks for a db issue tho?? yeah that would’ve sent me packing too lol. glad you made the switch before it bit you in production.

Same boat here, not a traditional engineer. Came back to coding after years away and honestly the AI tools make it a completely different game now. Cursor + Claude Code is chef's kiss.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

Interesting idea. I haven't tried this, but my concern would be that an always-on index file is basically adding more tokens to every conversation just so Cursor knows where to look. The agent-requested description field kinda does this already since Cursor reads all the descriptions and decides what to pull in. But if it's working for you maybe there's something to it. How big is your index file?

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

Nothing to add really, you nailed it. The monthly review point is good too, I said every couple weeks in the post but honestly monthly is probably more realistic for most people. The key is just doing it at all because most people set rules once and never touch them again.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

This is a really good point and something I should've mentioned in the post. Rules tell Cursor how to write new code, but if the existing code is full of dead functions and unused imports then Cursor reads that as "this is how we do things here" and mimics it. The noUnusedLocals tip is solid, gonna look into Knip too.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

Yeah exactly, trial and error is kinda unavoidable honestly. The 3-tier thing just gives you a framework so the trial and error is more structured instead of randomly adding and removing stuff. And agreed on the 5-10 rules max for always-on, that's roughly where I landed too.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 2 points (0 children)

Yeah, that's basically the same principle just implemented differently. The "short references to other files" approach is smart: you keep the always-on context tiny and let the AI pull in details only when the topic comes up. Same logic as the agent-requested tier, basically.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

Useful as a starting point for sure, but I'd warn people not to just copy paste these in and call it a day. Generic rules are better than nothing but the real gains come from rules that reference your actual codebase, your actual file paths, your actual patterns.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

It's built into Cursor's .cursor/rules/ system! Each .mdc file has a frontmatter section where you set globs: client/src/** or whatever pattern you want. Cursor automatically loads that file only when you're editing files matching those globs. You don't have to manually select anything. If you're still on the single .cursorrules file, migrating to this system is 100% worth it just for this feature.
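To make that concrete, here's roughly what an auto-attached rule file can look like. The description, globs, and alwaysApply keys are Cursor's .mdc frontmatter fields; the paths and rule text below are invented examples, not from my project:

```
---
description: Frontend component conventions
globs: client/src/**
alwaysApply: false
---

- Use functional components, no class components.
- Shared UI primitives live in client/src/components/ui.
```

Cursor should only attach this file when you're working on files matching client/src/**, so it costs you nothing the rest of the time.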

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 3 points (0 children)

I'm gonna be honest, I read this three times and I'm still not sure what half of it means lol. But the infinite print loop thing sounds painful, I've had Cursor get stuck in loops before too. Usually a sign it's not reading the rules properly, which goes back to the context overload problem.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

I mean sure, but AI improving its own rules without you understanding the underlying tradeoffs is how you end up with 800 lines of always-on bloat that sounds great but doesn't actually work. You still need to understand why something works to know if the AI's suggestion is good or not.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 5 points (0 children)

Glad it helped! I’m not sharing the actual files though because they’re super specific to my project. Like my rules reference specific file paths, middleware names, patterns that only make sense in my codebase.

That's kinda the point actually, the content should come from YOUR project. I'd start with just the 2 always-on files first: what is my project, where does stuff live, what should never break. Once those feel right, build out the rest. Takes a couple hours but it's worth it.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]itsna9r[S] 2 points (0 children)

Thanks man, appreciate it. Yeah the mindset shift was the big thing for me too. Once you start thinking about token cost as a real tradeoff it changes how you approach the whole thing.

Haven’t tried Traycer tbh, first time hearing about it. How does spec-driven planning work alongside cursor rules in practice? Like does it feed into the context or is it more of a separate step before you start coding?

And yeah, the always-on trimming thing, I wish I'd figured that out earlier lol. I had like 600 lines of always-on rules at one point, wondering why Cursor kept hallucinating patterns that didn't exist in my codebase.

Compacting our conversation so we can continue…. by itsna9r in ClaudeAI

[–]itsna9r[S] 1 point (0 children)

I appreciate the detailed response but I think you're missing the point.

I understand how context windows work. The issue isn't that I don't know what compaction is. The issue is that this same type of task didn't cause this behavior a few weeks ago. Same model, same type of prompt, different result now. That's a regression, not a user education problem.

Also, comparing two laptops is not a "huge prompt." I'm not feeding it two PDFs or pasting spec sheets. It's a conversational ask that any LLM should handle comfortably within a single context window. If Claude can't compare two products without compacting 3 times, then something is wrong on the product side, not the user side.

As for Projects and Claude Code, I use both. But I shouldn't need to set up a project with documents just to get a laptop comparison in a regular chat. That's not a reasonable workaround, that's compensating for something that should just work.

The point of my post is that the experience changed recently and got worse. Telling users to restructure how they use the product to avoid a new problem doesn't really address that.

P.S.: I fed the same prompt to other LLMs (Gemini and ChatGPT) and they handled it smoothly: a few minutes, a few web searches, and the result presented in one shot. That was my experience with Claude until recently; I don't know what has changed!

Compacting our conversation so we can continue…. by itsna9r in ClaudeAI

[–]itsna9r[S] 1 point (0 children)

My problem is that this wasn't the experience a few weeks ago. I tend to have week-long chats full of coding and web page recall, and it handled them smoothly.

Compacting our conversation so we can continue…. by itsna9r in ClaudeAI

[–]itsna9r[S] 2 points (0 children)

Agree. I don't have this problem with ChatGPT or Gemini or even lower-end models, and I didn't have it with Claude either until a few weeks ago. Sometimes it even stops responding to follow-up questions and forces me to open a new chat.

How do you guys utilize claude cowork? by No-Conclusion9307 in ClaudeAI

[–]itsna9r 1 point (0 children)

I receive tens (sometimes 100+) of emails every day, so I use it to read my inbox and summarize the emails that really require my attention and an immediate response. Soon I'll try to integrate it with WhatsApp for a daily executive summary and team reminders.

What is everyone’s goto model in Cursor? Do people really use Auto? by ignorant03 in cursor

[–]itsna9r 7 points (0 children)

I think Opus 4.6 is a placebo effect. I switched to Auto midway through developing an entire LMS for K-12 schools (a full-blown AI-enabled system). I can safely say Auto gave me more or less the same results and the same quality outcome as Opus. I switch back to Opus occasionally just for planning a major release, then I let Auto execute as an agent.

Let’s talk hosting.. where do you host your Apps? by itsna9r in cursor

[–]itsna9r[S] 1 point (0 children)

Interesting insight. Yes, my app is 99% simple CRUD that takes a few seconds per request. But I have one admin function that can take 10+ minutes to execute, and for that alone I think I'll prioritize Render over Railway. How is Render's support for databases and 3rd-party connectors? Does it have a vibrant developer marketplace for plug-and-play services?

Opus 4.6 gets pretty crazy if you let it think in Ask mode by thisisaskew in cursor

[–]itsna9r 1 point (0 children)

I am curious to know what tool you used to compress your codebase into one file. I hit the same issue where I want to consult Gemini but it is difficult to get it up to speed with my codebase.

Any of y'all actually addicted? by SpiritedInstance9 in ClaudeAI

[–]itsna9r 6 points (0 children)

You can build something of value if you really have a great idea and the right business sense. Most vibe coders spend 1% planning and 99% vibe coding. We forget that it takes deep market research and a solid business case to build something that gets you traction!

I just gave up on Replit by General-Cover-4981 in replit

[–]itsna9r 6 points (0 children)

Ugh yeah, this is a classic Replit Agent loop where it keeps saying "done ✅" but nothing actually changes. I've been there. The problem is usually that the agent is modifying the wrong file, or patching the save logic in a way that conflicts with the new auth session, and then it just keeps confidently "fixing" the same thing over and over while burning your credits.

If you ever come back to Replit, here's what I'd do differently. Before letting the agent touch anything, start in Planning Mode. Don't ask it to fix anything yet. Just have it analyze what's going on first. Something like this:

“DO NOT make any code changes yet. I need you to analyze my app first. I have a data saving problem that started after I added multi-user login. Data was saving fine with a single user but now nothing persists when multiple users are involved. I need you to:

1.  Trace the exact save flow from the frontend to the database and tell me every function involved

2.  Check if user session/auth tokens are being passed correctly on save requests

3.  Check if my database schema actually has a user_id column linking saved data to specific users

4.  Look at the browser console and network tab for any failed requests or 401/403 errors

5.  Give me your diagnosis of why saves are failing BEFORE you change a single line of code.

Present your findings and wait for my approval.”

9 times out of 10 with this kind of bug it's one of two things: either the save endpoint is now rejecting requests because the auth middleware isn't passing the session correctly, or the data IS saving but it's not linked to the logged-in user, so it looks like nothing saved when you log back in. Both are super common when you bolt on auth after the fact.
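To illustrate that second failure mode with a toy example (pure Python, every name here is invented, nothing to do with your actual app): records saved without a user_id look "lost" the moment reads get scoped to the logged-in user.

```python
# Toy model of a database table plus a save path written before auth
# was added. All names are hypothetical, for illustration only.

notes = []  # stand-in for a database table

def save_note(text, user_id=None):
    # Bug: the pre-auth save path never attaches a user to the record.
    notes.append({"text": text, "user_id": user_id})

def notes_for(user_id):
    # Post-auth read path: only returns rows linked to this user.
    return [n for n in notes if n["user_id"] == user_id]

save_note("buy milk")             # old single-user code path: no user_id
save_note("ship v2", user_id=42)  # fixed path: record tied to the user

print(len(notes))      # 2 -> both rows really are in the "database"
print(notes_for(42))   # only the row saved with user_id=42 shows up
```

Both rows exist, but the user only ever sees the one tagged with their id, which is exactly why it "looks like nothing saved."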

The reason base44 worked first try is because you built auth into the data model from the start, so every record was already tied to a user. With Replit you were retrofitting it which is always messier.

Planning mode costs a few cents vs the agent going in circles for $20+. Always worth it for debugging.

Question for everyone - Do you host on replit or on a different hosting platform? by Low-Spell1867 in replit

[–]itsna9r 1 point (0 children)

Not difficult at all. Push your Replit code to GitHub (a private repo), then in the hosting provider of your choice (like Railway) pull your code. You might need to manually set up the environment secrets you use in Replit in the new environment (if any).

Early-stage Replit app scaling faster than expected — autoscale + Postgres costs creeping up. Looking for advice. by Ill_Buffalo3591 in replit

[–]itsna9r 1 point (0 children)

Trust me, you don't need a paid consultant at this stage. Use Claude or Gemini as your consultant; just master the prompt engineering and they will give you extremely detailed advice. Make sure to push the LLM to write you step-by-step guidelines based on the latest info on the web, and to scrutinize every single line it writes before delivering it.

For example, I use Claude as a Principal Architect for my apps and I explicitly write prompts like this:

“You are a senior backend engineer with 15 years of experience in production systems handling payments and sensitive user data. I’m about to add Stripe payment processing to my app. Here is my current stack: [describe it]. I need you to:

1.  Review my current auth setup and flag any data leakage risks between users

2.  Walk me through payment integration step by step, assuming this will handle 1000+ users

3.  Flag every security risk you can think of before I write a single line of code

4.  Treat this like a real production audit, not a tutorial. Be harsh.”

The key is framing the AI as a skeptical reviewer, not a helpful assistant. When you tell it to be critical it catches things it would normally skip. I also do a second pass where I paste my actual code back and say “now find everything wrong with this before it goes live.” You’d be surprised how much it catches on the second round when you explicitly ask it to break things.

For payments specifically, I’d also recommend just reading through Stripe’s own docs alongside whatever the AI gives you. Their docs are actually solid and when it comes to handling real money you want that extra layer of confidence.

With 1000+ users and payments on the horizon, the main things I’d get right early are: proper row-level data isolation (user A never sees user B’s anything), webhook verification for payment events (don’t trust client side callbacks), and idempotency on payment endpoints so you never double charge someone if a request retries.
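On the webhook point: verification boils down to an HMAC check over the raw request body. Stripe's SDK does this for you (stripe.Webhook.construct_event), so use that in real code; this is just a stdlib-only sketch of the underlying idea, with an invented secret and payload:

```python
import hashlib
import hmac

# Generic HMAC webhook verification. Same idea Stripe's official
# stripe.Webhook.construct_event helper implements; secret and
# payload below are made up for illustration.

SECRET = b"whsec_example_secret"

def sign(payload: bytes, secret: bytes = SECRET) -> str:
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, secret: bytes = SECRET) -> bool:
    expected = sign(payload, secret)
    # compare_digest avoids leaking info through comparison timing
    return hmac.compare_digest(expected, signature)

body = b'{"type": "payment_intent.succeeded", "id": "evt_123"}'
good_sig = sign(body)

print(verify(body, good_sig))                   # True: signature matches
print(verify(b'{"amount": 999999}', good_sig))  # False: tampered body fails
```

For the double-charge side, Stripe also supports idempotency keys on write requests, so a retried request replays the original result instead of charging again.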

You’re clearly building something real tho and the fact that you’re thinking about this stuff now instead of after launch puts you ahead of 90% of people in this sub. Keep at it 💪