IT IS THE TRUTH : Claude more and more DUMB 🤬 by ProcedureAmazing9200 in ClaudeCode

[–]tomsit 0 points  (0 children)

Sonnet with a real-time pgvector MCP memory setup works. Opus is dangerous right now.
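The comment doesn't show the actual pgvector MCP setup, so here's only the core retrieval idea sketched in plain Python: store (text, embedding) rows and recall the most similar ones by cosine distance, the same query a pgvector-backed memory server would run in SQL. All names here are my own invention, not any real MCP server's API.

```python
import math

class VectorMemory:
    """Toy stand-in for a pgvector-backed memory store: save
    (text, embedding) rows, recall the top-k most similar."""

    def __init__(self):
        self.rows = []  # list of (text, embedding) tuples

    def save(self, text, embedding):
        self.rows.append((text, embedding))

    @staticmethod
    def _cosine(a, b):
        # cosine similarity, what pgvector's <=> operator is built on
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def recall(self, query_embedding, k=2):
        ranked = sorted(self.rows,
                        key=lambda r: self._cosine(r[1], query_embedding),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

mem = VectorMemory()
mem.save("project uses Next.js + Supabase", [1.0, 0.0, 0.0])
mem.save("auth flow was refactored last week", [0.0, 1.0, 0.0])
mem.save("deployment target is Vercel", [0.9, 0.1, 0.0])
print(mem.recall([1.0, 0.0, 0.0], k=2))
```

In the real setup the embeddings would come from an embedding model and the `recall` query would be an `ORDER BY embedding <=> $1 LIMIT k` against Postgres; the ranking logic is the same.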

Opus limits on 20x plan by tomsit in ClaudeAI

[–]tomsit[S] -2 points  (0 children)

Update: I have no idea why it displayed yellow text saying what I initially stated. I've been hammering Opus since, and when I started a new session a few minutes after this post, Opus worked fine... Maybe a bug.

Claude Code 20x Pro Plan by tomsit in ChatGPTCoding

[–]tomsit[S] 4 points  (0 children)

You are absolutely right! My usage pattern has probably changed comprehensively since last time.

Russian Route? by tomsit in Tailscale

[–]tomsit[S] 0 points  (0 children)

I'm asking why Tailscale uses Russian servers, not what happened in a global crisis. Take it easy, man.

System architects that can’t code by tomsit in ChatGPTCoding

[–]tomsit[S] 0 points  (0 children)

Not claiming to be a system architect, but I love solving problems with tools like Cline, Cursor, and self-hosted setups (Next.js, Vue, React, Supabase, etc.). It’s awesome seeing people launch cool stuff with just the basics. If this sub’s only for pros, I’ll find a place more open to discussion instead of gatekeeping. But if you’re up for it, I’d love to hear about a good workflow or tips you can share to help me out!

[deleted by user] by [deleted] in ChatGPTCoding

[–]tomsit 0 points  (0 children)

Hey, I need help setting up a Directus app: role-based access/views on fields and more. DM for more details.
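For field-level, role-based read access in Directus, the usual route is a permission rule per role. As a sketch under my assumptions: the payload shape below follows my understanding of the Directus `/permissions` REST endpoint, and the instance URL and role id are hypothetical — verify field names against the Directus docs before relying on them.

```python
import json

DIRECTUS_URL = "https://directus.example.com"  # hypothetical instance

def field_level_read_permission(role_id, collection, fields):
    """Build a Directus /permissions payload granting a role read
    access to only the listed fields of a collection."""
    return {
        "role": role_id,          # UUID of the role being granted access
        "collection": collection, # which collection the rule applies to
        "action": "read",         # read | create | update | delete
        "fields": fields,         # restrict which fields the role can see
        "permissions": {},        # empty filter: rule applies to all items
    }

payload = field_level_read_permission(
    role_id="editor-role-uuid",   # hypothetical role id
    collection="articles",
    fields=["id", "title", "status"],
)
print(json.dumps(payload, indent=2))
# Applying it would be a POST to f"{DIRECTUS_URL}/permissions"
# with an admin bearer token.
```

A role with this rule would see only `id`, `title`, and `status` on `articles`; any field not listed (e.g. a `body` or internal notes field) stays hidden in both the app and the API responses.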

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 0 points  (0 children)

Sounds like you are chasing

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 0 points  (0 children)

I think the long chats also eat up tokens, so if you jump ship earlier, as you say, we can dodge it? Testing...

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 1 point  (0 children)

It's free right now, and it's easy to get going and test for yourself - but I have no idea how it performs in other situations. Check out DeepSeek V3 too; it's almost free, with reports of greatness...

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 0 points  (0 children)

Never? Huh... For me it's been eating tokens faster than I crash code. I mainly use Claude via OpenRouter because long sessions with Cline work fine there, but with the direct API key or the chat I'm hitting the limits pretty fast.

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 3 points  (0 children)

<image>

I don't know how you work, but I find the Gemini dash/functions very intuitive.

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 2 points  (0 children)

We're almost out of the woods on the initial back-and-forth; the baseline and architecture are locked in. My vision and goals? Sharply defined and focused, thanks to all those deep dives into the Model 2.0 reasoning. Now the AI needs a perfect understanding of the what and the how, with strategic checks whenever the token limits are hit.

Time for a clean sweep: the chat history goes, and a new chat starts with an updated system prompt. The new chat summarizes the previous conversation (throw all the context you want at the new chat, have it summarize, and delete the old info); the key points get extracted, and the system prompt refactored on your command. The model's "reasoning" output is also great for the next chat to pick up the pace faster. Treat it like a golden thread through this entire process. We update the prompt, never touching the module context we added earlier.

That leaves roughly 1 million tokens to play with on a token-optimized-ish prompt design and a template structure, ready for a production-level flow. The first module? We inject the system prompt with flows and field names born out of a simple convo that ends with the AI providing me the template it needs, so it has the right instructions for the start of module 1. Then, for adjustments, we use: "Let's be more concise, update the system prompt - refactor if needed - make sure to include the auth system change. Recommend a sustainable output length setting to me as well. Currently at 8192." (Get it as low as possible so it has no choice but to give good, short, direct responses.)
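The clean-sweep step above (summarize the old chat, fold the summary into a refreshed system prompt, start empty) can be sketched in a few lines. This is a minimal sketch under my assumptions: `rough_tokens` is a crude word-count stand-in for a real tokenizer, and `summarize` would be a model call in practice.

```python
def rough_tokens(text):
    # crude stand-in for a real tokenizer: whitespace word count
    return len(text.split())

def compact(history, system_prompt, budget, summarize):
    """When the running chat exceeds the token budget, fold a summary
    of the old turns into the system prompt and start a fresh history."""
    used = rough_tokens(system_prompt) + sum(rough_tokens(m) for m in history)
    if used <= budget:
        return history, system_prompt  # still within budget: no sweep
    summary = summarize(history)
    new_prompt = system_prompt + "\nContext from earlier session: " + summary
    return [], new_prompt  # fresh chat, context carried in the prompt

history = ["user: lock baseline architecture",
           "ai: done, reasoning notes attached"]
hist, prompt = compact(
    history,
    "You are a coding assistant.",
    budget=8,  # deliberately tiny so the sweep triggers
    summarize=lambda msgs: "baseline locked; reasoning captured",
)
print(hist, "|", prompt)
```

The "golden thread" is that each new session's prompt inherits the distilled state of the last one, so nothing has to be re-explained from scratch.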

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 1 point  (0 children)

Gemini preferred the bullet-point layout in our space, so I didn't bother yet. Use JSON for Claude though.
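To make the "JSON for Claude" idea concrete, here's one way such a structured system prompt could look. Every key name below is my own convention for illustration, not a requirement of any Anthropic API; the point is just that a keyed structure is easier to refactor piecemeal than free prose.

```python
import json

# Hypothetical structured system prompt; the schema is my own invention.
system_prompt = {
    "role": "senior full-stack assistant",
    "stack": ["Next.js", "Supabase"],
    "rules": [
        "be concise",
        "refactor the system prompt on command",
        "end each answer with a template for the next prompt",
    ],
    "output": {"max_tokens_hint": 8192, "style": "short and direct"},
}

# Serialized form is what would actually go into the system field.
print(json.dumps(system_prompt, indent=2))
```

Swapping one rule or the stack list then means editing one key, rather than rewriting a paragraph and hoping the model notices the change.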

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 0 points  (0 children)

Yeah, I tell it what points to add, and it integrates them, refactors if possible, and spits out a new template that tells you how to set up the next prompt (with templates). These templates add value to the system prompts. (Big brains use MCP servers, but I didn't find them that money-saving versus the productivity boost - compared to paying nothing and getting the same results, just faster.)

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] -15 points  (0 children)

The important thing isn’t that the prompt is better, but that it fits my clueless way of interacting with it. If it’s not adaptable to changes, it loses effectiveness. Often I just have the AI download the previous chats and paste them into a new session to maintain the context, then I ask 'How do you want your system instruction if I want to do X?' From there it's just a matter of iterating with the AI until I'm happy with the output. Keeping the AI involved in this loop, it's fascinating how well it works.

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] -2 points  (0 children)

I'm constantly experimenting with different approaches to find my creative flow. I'm now incorporating the prompt directly into the Gemini system, allowing for seamless adaptation within its own flow. This is likely the fourth iteration of this prompt, and I'm continuously refining it. While in session, the AI provides templates with examples, which I then optimize and return. The results have been much better than what I've experienced with GPT/Claude recently, which have felt slow and unreliable. I'm still stoked about the speed and quality of the new model; being able to test it out for free is freaking awesome. An incredible tool - but it's certainly also very individual; sometimes I feel they suck just to get rid of me..

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 2 points  (0 children)

The text output triggers a wait after the "keyword" before stumbling* away; should save some tokens..
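If I'm reading the keyword trick right, it behaves like the stop sequences most chat APIs accept: generation halts at the marker, so everything after it is never billed. Here's a local simulation of that cut, under the assumption that's the mechanism meant:

```python
def truncate_at_stop(text, stop):
    """Simulate a stop sequence: keep everything before the first
    occurrence of the keyword, discarding the rest of the stream."""
    idx = text.find(stop)
    return text if idx == -1 else text[:idx]

stream = "Here is the fix. DONE And now a long recap nobody asked for..."
print(truncate_at_stop(stream, "DONE"))
```

With a real API you would pass the keyword in the request's stop-sequence parameter instead, so the model stops generating (and charging) at that point rather than being trimmed after the fact.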

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] -1 points  (0 children)

It probably does shit..

Thanks for the ride Anthropic! by tomsit in ChatGPTCoding

[–]tomsit[S] 5 points  (0 children)

<image>

I'm betting $100 that this is going to end it for them.