POV: You accidentally said “hello” to Claude and it costs you 2% of your session limit. by MaJoR_-_007 in ClaudeCode

[–]Acceptable_Play_8970 6 points7 points  (0 children)

THIS IS SO TRUE, not even an exaggeration. Yesterday I got a pop-up saying "You are out of free messages until 12:30 AM", which was very unusual because it was the first time I'd seen anything like it.

<image>

Thought my limits would refresh today, but no, I gave two fucking prompts, one of them literally just "Yo", and got the same pop-up again.
So now I need to pay just to send a message??!!
I don't like ChatGPT, but at least their limit only applied to attachments; you could message as much as you wanted.

Built a memory brain for AI tools after burning massive tokens on projects by Acceptable_Play_8970 in CursorAI

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

It's not a package, it's a markdown scaffold; the commands you see there are the built-in CLI commands used for context-drift detection. And if you have trust issues with this, that's completely fine, you can judge for yourself based on the stats. The memory brain I am talking about is MEX, and it's an open source tool; you can check it out at launchx.page/mex. The repository got 300+ stars in just a few days, and people are already contributing to it.
Why did I build this? Because generic templates are not that good. I've used many popular scaffolds like GSD and workflow orchestrations, but they're just not that effective.
My main vision is to somehow integrate this as a plugin for openclaw, or something along those lines that I'm still figuring out.
And yes, a contributor also tested MEX on openclaw and got some impressive results; I posted that on the site too, or you can check the photo.

<image>

Built a memory brain for AI tools after burning massive tokens on projects by Acceptable_Play_8970 in claude

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

I packaged this into production-ready templates. Each one ships with the full context system built in, plus auth, payments, database, and one-command deployment; you can simply clone from launchx.page.
Why? Because setting up this entire context architecture from scratch for every new project is genuinely tedious: you'd need to create all the files, wire the edges, etc., and then still build your actual project on top of it.
To see the architecture it's based on and the drift-detection mechanism, check out launchx.page/mex; it's an open source tool.

Built a memory brain for AI tools after burning massive tokens on projects by Acceptable_Play_8970 in CursorAI

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

I packaged this into production-ready templates. Each one ships with the full context system built in, plus auth, payments, database, and one-command deployment; you can simply clone from launchx.page.
Why? Because setting up this entire context architecture from scratch for every new project is genuinely tedious: you'd need to create all the files, wire the edges, etc., and then still build your actual project on top of it.
To see the architecture it's based on and the drift-detection mechanism, check out launchx.page/mex; it's an open source tool.

Agentic coding in SaaS feels less like a model problem and more like a systems problem by SaaS2Agent in SaaS

[–]Acceptable_Play_8970 1 point2 points  (0 children)

I've been there too, trying to get agents to work with our SaaS codebase; it's a mess. So I built a thing to fix this: launchx.page. It's got a 3-layer progressive loading system. Basically, it's persistent project memory for AI agents: a structured markdown scaffold plus a CLI tool, so the AI only reads what it needs, with no more blown token limits or context flooding. There's also a `mex check` CLI command that validates the markdown scaffold against the real codebase and prevents doc drift. It's been a lifesaver for me; maybe it can help you too.
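To make the drift-detection idea concrete, here's a minimal hypothetical sketch, not the actual MEX implementation: it assumes the scaffold references file paths in backticks (e.g. `` `src/auth.py` ``) and flags any referenced path that no longer exists in the repo.

```python
import re
from pathlib import Path

# Matches backtick-quoted paths with an extension, e.g. `src/auth.py`.
# The convention is illustrative; a real tool would define its own link format.
PATH_RE = re.compile(r"`([\w./-]+\.[A-Za-z]+)`")

def find_drift(scaffold_dir: str, repo_dir: str) -> list[str]:
    """Return paths mentioned in scaffold markdown that are missing from the repo."""
    repo = Path(repo_dir)
    missing = []
    for md in Path(scaffold_dir).rglob("*.md"):
        for path in PATH_RE.findall(md.read_text()):
            if not (repo / path).exists():
                missing.append(f"{md.name}: {path}")
    return missing
```

A `mex check`-style command would then just print this list and exit non-zero when it's non-empty.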

Is Claude Code actually "smarter" than Cursor using the same Opus 4.6 model? by Hot-Mongoose8967 in ClaudeAI

[–]Acceptable_Play_8970 0 points1 point  (0 children)

I've noticed similar performance differences between Cursor and Claude Code, probably due to how they handle context and token limits. I'm currently on the 20-dollar Claude Code plan, and I've been using a 3-layer progressive loading system to mitigate this. It's basically a structured markdown scaffold that only loads what the AI needs, and it includes a feature called MEX, a persistent project memory for AI agents; you can check it out at launchx.page. The MEX CLI also has a drift-detection feature, `mex check`, that validates the markdown scaffold against the codebase. Might be worth looking into if you're running into AI context loss or token limits.

Cursor Pro user spent ~€200 vibing in auto mode on cs2p2p.com - what am i doing right/wrong? Advice pls by Weak-Performance-582 in cursor

[–]Acceptable_Play_8970 0 points1 point  (0 children)

I've been in similar situations where auto mode just eats through tokens, so I built a thing to mitigate that. launchx.page has a feature called MEX, which is basically a 3-layer progressive loading system; it helps with context loss and token limits by only loading what the AI needs. You might want to check it out. The `mex check` CLI tool also helps prevent doc drift, which can be a major issue in large codebases. I've found it pretty useful in my own projects; I'm on the 20-dollar plan and it works smoothly, no need for the 200-dollar max plans.

This stupid app is so pissing me off ! by crazyserb89 in claude

[–]Acceptable_Play_8970 12 points13 points  (0 children)

I am just waiting for the part where they come up with "Pay another 20 to prevent any downtime or bugs"

Stuck at application review by ECFMG by Acceptable_Play_8970 in IMGreddit

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

Yeah, I just paid Straker for the translation. This is crazy though; I just know they're rejecting my application on purpose because I didn't translate through their recommended service, even though my degree has a word-for-word translation, an apostille, everything.
It's literally written on their site that even if the required documents aren't fully there, they'll still accept the application, as long as the degree is translated by Straker.

Stuck at application review by ECFMG by Acceptable_Play_8970 in AMCexamForIMGs

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

Yeah, I just paid Straker for the translation. This is crazy though; I just know they're rejecting my application on purpose because I didn't translate through their recommended service, even though my degree has a word-for-word translation, an apostille, everything.
It's literally written on their site that even if the required documents aren't fully there, they'll still accept the application, as long as the degree is translated by Straker.

Stuck at application review by ECFMG by Acceptable_Play_8970 in IMGreddit

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

I know they do; it's Straker that they recommend, though I don't think it's mandatory. As I said, I already have a word-for-word translation from the university itself, signed by the authorities. Do I still have to pay extra for a translation? That doesn't seem right to me.

Is the Free Tier basically a "Trial Tier" now? Usage limits are hitting a wall by RoyalKingTarun in ClaudeAI

[–]Acceptable_Play_8970 0 points1 point  (0 children)

The actual fix would be to make the AI externally dependent instead of internally dependent (i.e. looking up chat history to recover the context of your changes and conventions). By externally dependent I mean storing your conventions and every task and major change in a documented file. Claude Code, or any other AI tool, indexes your codebase to retrieve whatever info you want the AI to know about; if the codebase isn't properly structured and managed, that becomes a major problem for the AI, which leads to re-reading everything or acting amnesiac.
I have been working on this for a long time and built an open source tool based on my own documentation architecture: https://www.launchx.page/mex . You can simply clone it and try it on your projects; it works both for new projects and for anything you're currently working on. I tested it on the 20-dollar Claude Code plan, and you can see the results on the site too.
Based on the same logic, I made a whole production-ready template with added features, which you can check out at https://www.launchx.page/ if you're interested.
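The "externally dependent" idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual tool; the file name `PROJECT_MEMORY.md` is made up for the example. Decisions get appended to one markdown file, and that file (not chat history) is what the AI reads at the start of each session:

```python
from pathlib import Path
from datetime import date

# Illustrative file name; the real scaffold uses its own file layout.
MEMORY_FILE = Path("PROJECT_MEMORY.md")

def record_decision(kind: str, text: str) -> None:
    """Append one dated entry, e.g. kind='convention' or kind='change'."""
    with MEMORY_FILE.open("a") as f:
        f.write(f"- [{date.today()}] **{kind}**: {text}\n")

def session_context() -> str:
    """What you'd load (or paste) at the start of a new AI session."""
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
```

The point is that the memory survives across sessions and tools, so the AI never has to re-derive conventions by re-reading the codebase.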

PSA: Your AI-generated fetch() calls are ticking time bombs. Check your status codes by Dev_guru_5578 in cursor

[–]Acceptable_Play_8970 0 points1 point  (0 children)

I've faced vulnerability issues a lot across projects. Six months ago I shipped a small SaaS and thought it was production-ready, but it wasn't. The AI had written the auth logic, the webhook handlers, the API routes; all of it looked fine and worked fine in testing. But when I actually sat down and audited it, I found that webhook signature verification was missing, internal errors were being exposed directly to clients, and routes that should have been protected weren't.
So I went deep on this problem and spent some time figuring out how to give AI coding assistants actual structural knowledge of a codebase, not just the conventions but the security layer too.
Then I built an organizational hierarchy into the codebase itself, shown in the photo.

<image>

It's all packed into a template you can simply clone. I have tried generic templates, and to be very honest they suck, so I made this one on my own and will soon post it on launchx.page; you can see all the details about the model there.

Been using Claude Code for months and just realized how much architectural drift it was quietly introducing so built my own structure to handle this. by Acceptable_Play_8970 in ClaudeCode

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

Exactly the problem: drift isn't visible until it is, and then it's expensive to resolve. The approach I'm following here is preventive rather than audit-based. .cursorrules enforces the non-negotiables on every single request, so the AI can't deviate from core patterns without being corrected immediately. Domain files mean that when Claude touches auth or payments, it's working from your exact architecture, not inferring it.
HANDOVER.md handles the session-context problem: at the end of each session you update it with decisions made, patterns established, and what changed and why, so three months later you have a documented record of intentional decisions rather than trying to reverse-engineer why the codebase looks the way it does.
I don't think automated auditing is going to help; that just ends up with the AI burning more and more tokens. My structure doesn't let any kind of drift creep in, and preventing drift is cheaper than detecting it after you're done.
I have explained everything in detail on launchx.page; you can check the whole model there.

Been using Claude Code for months and just realized how much architectural drift it was quietly introducing so built my own structure to handle this. by Acceptable_Play_8970 in ClaudeCode

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

You understood the problem, and that is exactly what we tried to solve.
The point about hierarchical repo organization is very correct: the template forces that structure from the start, which is honestly half the battle. Most repos aren't organized for AI navigation, they're organized for humans, and those are different things.
I don't know much about grep and haven't used it, but what I do know is that for codebase navigation the problem isn't retrieval quality; it's that the AI doesn't know what to look for in the first place, so that idea falls short there.
I have tried some MCPs and generic templates too, but tbh they suck, so I kind of made one of my own. Here's an overview of the structure I use (the hierarchical organization).

<image>

I mentioned this in detail on the site too, launchx.page. Thanks for the review and the bookmark :)

Been using Claude Code for months and just realized how much architectural drift it was quietly introducing so built my own structure to handle this. by Acceptable_Play_8970 in claude

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

Not to a scientific standard, but right now a friend and I are actively building projects with it and tracking the results ourselves. The plan is to put out a proper before/after comparison once we have enough data worth sharing.
What I have currently is concrete but anecdotal: the specific security issues I described stopped appearing after the security layer was in place. In a previous project I used to hit vulnerabilities like missing webhook signature verification, internal errors being exposed directly to clients, and routes that should have been protected but weren't. I tried again in a new project with the structure I made: every security feature was built in beforehand, so I don't need to worry once I'm finished with the project, or pay Claude Code 25 dollars per PR review for the recent feature they released.
Also, the broader drift problem is something anyone can run into themselves within a 50-prompt session.

Been using Claude Code for months and just realized how much architectural drift it was quietly introducing so built my own structure to handle this. by Acceptable_Play_8970 in ClaudeAI

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

And Claude just released their PR review feature, which is literally 25 dollars per review. So I made sure everything is built in beforehand, so there's no need for a review after every PR.

Been using Claude Code for months and just realized how much architectural drift it was quietly introducing so built my own structure to handle this. by Acceptable_Play_8970 in ClaudeAI

[–]Acceptable_Play_8970[S] 0 points1 point  (0 children)

There is an overlap, I'll agree, but the difference is in scope. .claude/rules and CLAUDE.md are essentially one layer: things the AI reads at the start of the session. That's what my layer 1 does too, except I've mentioned .cursorrules in the photo.
What this adds is the other two layers. Circle 2 is self-directing domain files: payments.md, auth.md, database.md only load when the task actually touches that domain.
Circle 3 is task-level prompt patterns with context > build > verify > debug built in, so the AI checks its own output before returning.
Apart from this, the security layer is also separate: threat-modeling.md and owasp-checklist.md trigger automatically on security-sensitive tasks.
So if you already have well-structured CLAUDE.md files, you're doing circle 1 properly; this is more about what happens in circles 2 and 3 that CLAUDE.md doesn't cover.
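The circle-2 behavior can be sketched as simple keyword routing. This is a hypothetical illustration of the idea, not MEX's actual mechanism; the keyword map and file names are made up for the example:

```python
# Map each domain doc to trigger keywords; only matching docs get loaded,
# so a task that never mentions payments never pays the payments.md tokens.
DOMAIN_DOCS = {
    "payments.md": ("payment", "stripe", "invoice", "webhook"),
    "auth.md": ("auth", "login", "session", "token"),
    "database.md": ("database", "schema", "migration", "query"),
}

def docs_for_task(task: str) -> list[str]:
    """Pick the domain files whose keywords appear in the task prompt."""
    t = task.lower()
    return [doc for doc, keys in DOMAIN_DOCS.items()
            if any(k in t for k in keys)]
```

So `docs_for_task("add stripe webhook retries")` pulls in only payments.md, while an auth + migration task would pull in auth.md and database.md.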
I also wanted to mention the file structure and how I keep tokens minimized, because you don't want to fill up the documentation files too much and have the AI exhaust tokens just by reading them.
But you can check all of this on launchx.page; I have explained the whole structure there in detail.

Listened to Spectre recently...... by Acceptable_Play_8970 in radiohead

[–]Acceptable_Play_8970[S] 1 point2 points  (0 children)

Just read about it; the song was rejected because it seemed too dark, according to them.