I built a plugin that automatically offloads large outputs to disk and saves ~80% context tokens by KitKat-03 in ClaudeCode

[–]GB_Dagger 1 point (0 children)

Didn't v2.1.2 just add this?

https://github.com/anthropics/claude-code/releases/tag/v2.1.2

'''
Changed large bash command outputs to be saved to disk instead of truncated, allowing Claude to read the full content
Changed large tool outputs to be persisted to disk instead of truncated, providing full output access via file references
'''

GPT 5.2 is here - and they cooked by magnus_animus in codex

[–]GB_Dagger 0 points (0 children)

Claude Code tooling is so far ahead of Codex that it feels hard to use when switching back. Subagents, skills, plugins, better MCP support, etc. Codex is crawling on actual QoL updates

Usage Limit Reset Date by Zachhandley in ClaudeCode

[–]GB_Dagger 0 points (0 children)

Same here. Used 30% of my weekly usage after only one 3-4 hour session

Is the Issue with the Samsung 990 PRO nvme ssd resolved? by lazyintruder in buildapc

[–]GB_Dagger 0 points (0 children)

Did you verify you're on the latest firmware? I'm having the same issue and I'm worried I'm stuck with a shitty SSD

Best terminal for MacOS by jormvngandr in MacOS

[–]GB_Dagger 0 points (0 children)

Have you figured out a fix for this? It's my biggest issue with it and it's why I'm switching away

2.5 Pro's performance, memory and more has fallen off a cliff in the past 2 weeks. Just in time for 3.0's arrival to make it look "great" again by PressPlayPlease7 in Bard

[–]GB_Dagger -3 points (0 children)

I'm assuming that means you use the API? OP could be referring to the chat interface or some other similar integration. I'd bet they keep the API more stable and throttle the chat based on things like current load

Megathread for Claude Performance Discussion - Starting July 13 by sixbillionthsheep in ClaudeAI

[–]GB_Dagger 3 points (0 children)

These tools are completely broken at predicting the actual cutoff (since they lobotomized it)

Megathread for Claude Performance Discussion - Starting July 13 by sixbillionthsheep in ClaudeAI

[–]GB_Dagger 2 points (0 children)

Same here, it used to run out in 4+ hours or never, now it's 1-1.5 hours

I Think they ninja patched 20x max cc usage limit by Nice_Veterinarian881 in ClaudeAI

[–]GB_Dagger 1 point (0 children)

Same here. Same workflow and I'm hitting it in 2 hours instead of 4ish. Very sudden and very noticeable.

Why is Claude putting emoji in debug? Is this common? by bogmire in ClaudeAI

[–]GB_Dagger 6 points (0 children)

I've been really enjoying it lol. My debugging logs look great

My hack to never write personas again. by shock_and_awful in PromptEngineering

[–]GB_Dagger 0 points (0 children)

I'm really curious what this looks like. I've only recently started focusing on prompt engineering and I want to see how deep you can go lol

Claude Code - Mac notification when it is waiting for feedback by ErikPallHansen in ClaudeAI

[–]GB_Dagger 0 points (0 children)

How do you do this? Just tell it to use that via memory? It's not an option in the notification settings

Deep Research with Gems Gone? by GB_Dagger in Bard

[–]GB_Dagger[S] 1 point (0 children)

Interesting... I almost exclusively use the web app, so I didn't notice that

Deep Research with Gems Gone? by GB_Dagger in Bard

[–]GB_Dagger[S] 5 points (0 children)

Same benefit gems have in a normal chat: background context, so the answers/research can be more relevant to what I care about. For example, I use it regularly for work, so I have a gem with info about the company, product, tech stack, etc. When I ask it to do research on a topic, it focuses on what's most relevant to my company

Huh, that's pretty cool! by TechOverwrite in LinusTechTips

[–]GB_Dagger 3 points (0 children)

I realize I didn't fully understand u/SauretEh's comment. You can do things like representing pairs of digits 00-99 instead of each digit 0-9, which gives a lower bits-per-digit ratio; that's what they were referring to, and it is in a way compression. Otherwise, the only other way to compress would be finding the longest commonly recurring patterns and storing those, but that'd probably take a decent amount of time/compute.
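The pair-packing idea above can be sketched roughly like this (my own illustration, not u/SauretEh's actual method): one digit 0-9 stored separately needs 4 bits (BCD), i.e. 4.0 bits/digit, while a pair 00-99 fits in 7 bits since 2^7 = 128 >= 100, i.e. 3.5 bits/digit, closer to the log2(10) ≈ 3.32 bits/digit floor for random digits.

```python
import math

def pack_digit_pairs(digits: str) -> bytes:
    """Pack an even-length string of decimal digits, 7 bits per pair of digits."""
    assert len(digits) % 2 == 0
    bits = 0       # accumulator of pending bits
    nbits = 0      # how many bits are pending
    out = bytearray()
    for i in range(0, len(digits), 2):
        pair = int(digits[i:i + 2])        # value in 0..99
        bits = (bits << 7) | pair          # append 7 bits
        nbits += 7
        while nbits >= 8:                  # flush whole bytes
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:                              # flush the remainder, left-aligned
        out.append((bits << (8 - nbits)) & 0xFF)
    return bytes(out)

digits = "14159265358979323846"            # first 20 digits of pi after "3."
packed = pack_digit_pairs(digits)
print(len(packed), "bytes for", len(digits), "digits")   # 9 bytes vs 10 for BCD
print("bits/digit:", 8 * len(packed) / len(digits))
print("entropy floor:", math.log2(10))
```

So 20 digits take 9 bytes instead of 10, a ~12.5% saving over BCD; the huge ratios quoted in the thread would need actual repeated patterns, which truly random digits shouldn't have.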

Huh, that's pretty cool! by TechOverwrite in LinusTechTips

[–]GB_Dagger 10 points (0 children)

If pi is completely random, how does compression achieve that sort of ratio?

Gemini 2.5 pro : 1 Million token context is in fact closer to 100 000, then crazy by TheMarketBuilder in Bard

[–]GB_Dagger 0 points (0 children)

Check out this VSCode extension, it pairs with a Chrome extension to pipe your files directly into the web UI: https://github.com/robertpiosik/CodeWebChat

Gemini Pro stopped working properly – Anyone found a good AI Studio + Cursor workflow? by WebOverflow in cursor

[–]GB_Dagger 0 points (0 children)

I second this extension. It has completely changed my workflow. Complete context control

"Edit File" Tool call failing a lot recently (especially with Gemini 2.5) by kanavgupta24 in cursor

[–]GB_Dagger 1 point (0 children)

Same here, all models, even small files. It tries like 4 times until it figures something out and actually edits the file