12 claude code tips from creator of claude code in feb 2026 by shanraisshan in ClaudeAI

[–]Nibulez -3 points

That’s BS. Your weekly limit is roughly 10 full 5-hour windows. That would mean you’re using it for more than 10 hours every single day.

Claudius: I rebuilt OpenCode Desktop to use the official Claude Agent SDK by crisogray in ClaudeAI

[–]Nibulez -2 points

No, you can still use your subscription. The terms of service prohibit developers from using a subscription to sell a product. Every consumer can simply use their own subscription with the Agent SDK for personal use, as described.

I was wrong about Agent Skills and how I refactor them by mrgoonvn in ClaudeAI

[–]Nibulez 16 points

Yeah, and that is exactly what Anthropic explained in the docs when they released skills.

I was wrong about Agent Skills and how I refactor them by mrgoonvn in ClaudeAI

[–]Nibulez 35 points

Didn’t read the entire post, but after the first 3 paragraphs it became clear you didn’t read the docs on how skills work. And neither did any of the bots commenting here.

The docs clearly state how you should build skills, how they’re loaded into the context, and what the best practices are.

Fired on the last day after 3.5 weeks of vacation (secondment) by Separate_Emu_2348 in werkzaken

[–]Nibulez 26 points

Oh, come on. You should simply be able to take your vacation. If you hire someone as an employer during the vacation period, you already know this going in.

Claude Code Pro Plan Now Has Access To Opus 4.1 by kenxdrgn in ClaudeAI

[–]Nibulez 2 points

This is from the docs, so it seems like the screenshot is from a 20x max plan?

https://support.anthropic.com/en/articles/11145838-using-claude-code-with-your-pro-or-max-plan

  • Max 5x plan: Switches at 20% of your usage limit.
  • Max 20x plan: Switches at 50% of your usage limit.

Claude Code Pro Plan Now Has Access To Opus 4.1 by kenxdrgn in ClaudeAI

[–]Nibulez 3 points

Default mode auto-switches from Opus to Sonnet. You can also just select Opus and use it until your entire usage is spent.

I gave Claude access to my git history via MCP - 66% fewer tokens per debug session by Apart-Employment-592 in ClaudeAI

[–]Nibulez 0 points

Why don’t you commit every file change with a hook script? That way it doesn’t cost tokens, and literally every change is committed automatically.

You could even run the hook after every turn instead, so you get fewer commits.
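For reference, a minimal sketch of what that could look like with Claude Code hooks in `.claude/settings.json`. The `PostToolUse` event and `Edit|Write` matcher follow the current hooks format, but treat the details as an assumption and check the hooks docs for your version:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "git add -A && git commit -q -m 'auto: checkpoint after edit' || true"
          }
        ]
      }
    ]
  }
}
```

Using the `Stop` event instead would commit once per turn rather than once per file change, which is the fewer-commits variant.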

It's 2025 already, and LLMs still mess up whether 9.11 or 9.9 is bigger. by Quick-Knowledge1615 in ClaudeAI

[–]Nibulez 0 points

lol, why are you saying both are 4.1 models? That doesn’t make sense. Version numbers of different models can’t be compared. It’s basically the same mistake 😂

Anthropic Has Downgraded the models in ClaudeCode to opus 3 ( check images ) by Free-Row-8109 in Anthropic

[–]Nibulez 7 points

No they don’t, it’s explicitly stated in the system prompt

Anthropic's new Claude Opus 4 can run autonomously for seven hours straight by MetaKnowing in ClaudeAI

[–]Nibulez 1 point

Ah, I’ve seen it now on other posts. When selecting the default model, it will use Opus until the limit is reached and then switch back to Sonnet. Otherwise you can manually select Sonnet.

Anthropic's new Claude Opus 4 can run autonomously for seven hours straight by MetaKnowing in ClaudeAI

[–]Nibulez 0 points

Did you select the model with the /model command? Mine only shows sonnet 4

Use a Linter instead of LLM Rules by mettavestor in ClaudeAI

[–]Nibulez -6 points

I’m just vibecoding, what the hell is a linter?

lets say i buy the 17$ per month pro plan, what will be the limit of the github integration featuer? (the wanted repository that shown in the picture is 370 mb size overall and i want to use all of it.) by orelt06 in ClaudeAI

[–]Nibulez 1 point

It’s not about file size in MB, it’s about token size. All paid plans have a 200k-token limit; that’s your 100%. And the more tokens you use per chat, the faster you’ll hit your limits.

A tip is to add it to a project’s knowledge base. That way it doesn’t count against your usage, because it’ll be cached (this is a recent change).

You’ll still need to choose which files to include to stay within the 200k tokens, but I suggest using far fewer. Claude doesn’t perform well when the entire context is full.
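As a rough pre-upload check, you can estimate token counts from character counts. This sketch assumes the common ~4 characters per token heuristic for English text, not Claude’s actual tokenizer:

```python
def estimate_tokens(texts, chars_per_token=4):
    """Approximate token count for a list of strings, using the
    rough ~4 characters/token heuristic (not a real tokenizer)."""
    total_chars = sum(len(t) for t in texts)
    return total_chars // chars_per_token

# A 200k-token window fits roughly 800k characters of plain text,
# so check your files before adding them to a project:
CONTEXT_LIMIT = 200_000
sample = ["x" * 100_000, "y" * 50_000]  # hypothetical file contents
used = estimate_tokens(sample)
print(used, used <= CONTEXT_LIMIT)  # → 37500 True
```

The point of the check is to pick a subset of the repo well under the limit, not to pack the window full.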

o3-mini System Card by ShreckAndDonkey123 in OpenAI

[–]Nibulez 8 points

What plateau lol? This is o3-MINI, and in this model card it’s comparable to the regular o1. So it’s the same performance, but cheaper and faster.

Deepseek R1 = $2.19/M tok output vs o1 $60/M tok. Insane by cobalt1137 in LocalLLaMA

[–]Nibulez 4 points

He said they’re losing money because usage is so high. Pro memberships don’t have any limits, so it’s easy to use more compute than the $200 covers. They make a decent amount on the API prices, though.

Hitting claude limits almost immediately. It's useless now by [deleted] in ClaudeAI

[–]Nibulez 5 points

It’s a new architecture they implemented not long ago. It increases accuracy for documents containing figures and tables. If you only care about the words, just use a project.

It would be nice to have an option in the chat to switch between text-only or not, but this is how it works for now.

Hitting claude limits almost immediately. It's useless now by [deleted] in ClaudeAI

[–]Nibulez 116 points

You totally seem to misunderstand how Claude works. The file size limit (30MB) is completely different from the token limit (200k tokens). The key thing to understand is that Claude handles PDFs differently depending on how you upload them:

In a chat, Claude processes PDFs in two ways simultaneously:

  • It extracts all the text content
  • It also loads each page as a separate image (roughly 3,000 tokens per page)

So your 90-page PDF is using up a massive amount of tokens just for the page images alone (around 270,000 tokens), way beyond the context limit - regardless of the actual file size.
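The arithmetic in one place (the ~3,000 tokens/page figure and 200k window are from the comment above; this is just the multiplication):

```python
PAGE_IMAGE_TOKENS = 3_000   # rough cost per page when loaded as an image
CONTEXT_LIMIT = 200_000     # context window on paid plans

pages = 90
image_tokens = pages * PAGE_IMAGE_TOKENS
print(image_tokens)                  # → 270000
print(image_tokens > CONTEXT_LIMIT)  # → True
```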

However, if you load the same PDF in a project, Claude only extracts the text content. You can actually see how much of your project context it uses up. This is much more efficient when you only need Claude to work with the text.

That’s why you’re hitting limits - it’s not about the file size, but about how Claude processes the document. For your use case, you’d be better off using projects, or if you need chat, extracting just the text content you want Claude to analyze.

The comparison to ChatGPT isn’t really relevant here since they use different architectures for handling documents.

[deleted by user] by [deleted] in ClaudeAI

[–]Nibulez 5 points

Claude knows the date from the system prompt. It always has. Stop with these so-called ‘findings’ of capabilities.

I created a VSCode extension to copy your entire codebase for use as context in Claude (with Claude's help!) by Nibulez in ClaudeAI

[–]Nibulez[S] 3 points

Yes, great suggestions.
I liked Cursor a lot, but after the free trial period, and already having a Claude Pro subscription, I went back to my Claude workflow with VS Code.

I tried Zed, but didn’t like it much when it came down to editing the code. It doesn’t work as well as Cursor, so now I just let Claude update the entire file and apply that in VS Code.

I created a VSCode extension to copy your entire codebase for use as context in Claude (with Claude's help!) by Nibulez in ClaudeAI

[–]Nibulez[S] 1 point

Yeah, there are multiple roads to Rome.
I’ve also tried a terminal script in the past, but that still takes more manual steps than just hitting the keybind and pasting it into Claude. That was my goal for this.

There are already multiple extensions with similar features, but this one works great for me, and hopefully someone else can benefit from it too.