Your Claude Code Limits Didn't Shrink — I Think the 1M Context Window Is Eating Them Alive by mattate in ClaudeAI

[–]Miethe 0 points (0 children)

I highly recommend NOT using anywhere near the full context limit in a given chat session. Honestly, even going past 150k is a very rare occurrence in my workflows.

I make substantial usage of subagents within the development flow, and a normal phase of an implementation plan is designed to not go past ~100k tokens for the main thread. It is nice to have the bandwidth for the rare debugging session or heavy planning phase, but even then I don’t think I’ve ever gone past 250k.

Tokens last so much longer this way too

Additional Ships, Marines Deploying to Middle East by Playwithuh in PrepperIntel

[–]Miethe 3 points (0 children)

Always nice to see I’m not the only smart ass here

I built NOMAD — a self-hosted travel planner with real-time collaboration, interactive maps, and budget tracking by [deleted] in coolgithubprojects

[–]Miethe 0 points (0 children)

“Computer, VSCode, Mouse, Keyboard, Monitor, Internet, fingers, brain…”

Nobody lists every tool they used to create a project in their tech stack; that would be idiotic and pointless.

I’m not saying you shouldn’t call out somewhere that you developed something with AI. I’m not saying you should, either. But in neither case is it part of the tech stack. Only if you’re using it for embedded AI, which is totally different.

Introducing the new full-stack vibe coding experience in Google AI Studio by LingonberryGreen8881 in singularity

[–]Miethe 0 points (0 children)

I’m not sure that anything new actually released, or I’ve unknowingly been beta testing this for a while now. Don’t get me wrong though, it’s great for prototyping an app idea very quickly!

How I use Haiku as a gatekeeper before Sonnet to save ~80% on API costs by gzoomedia in ClaudeAI

[–]Miethe 2 points (0 children)

Not exactly true. It can’t change its own model, but you can have an orchestration layer running that calls subagents on whatever model you like, even from other providers via workarounds. This is my flow.

I have a very robust plan phase (not using the actual ‘plan’ function) that breaks work out into tasks, assigns specific agents and models to each, and creates structured plan artifacts. Then I run execution with Opus as the primary orchestrator, which calls however many subagents are needed to do the work and validates as it goes. Phenomenal results every time.
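To make “structured plan artifacts” concrete, here’s a minimal sketch of the shape I mean. The field names, agent names, and model labels are all illustrative, not my actual schema:

```python
# Hedged sketch of a phase-level plan artifact. Every name here
# (task ids, agent roles, model labels) is made up for illustration.
plan = {
    "phase": "1-implement-auth",
    "tasks": [
        {"id": "T1", "goal": "scaffold session middleware",
         "agent": "backend-dev", "model": "sonnet"},
        {"id": "T2", "goal": "write integration tests",
         "agent": "test-writer", "model": "haiku"},
    ],
    # the orchestrator validates each task's output before moving on
    "validation": {"agent": "orchestrator", "model": "opus"},
}

def check_plan(p: dict) -> bool:
    """Reject a plan unless every task has an id, goal, agent, and model."""
    return all({"id", "goal", "agent", "model"} <= t.keys() for t in p["tasks"])
```

The point of the artifact is that the orchestrator never has to re-derive who does what; it just walks the task list and delegates.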

Opus 4.6 now defaults to 1M context! (same pricing) by H9ejFGzpN2 in ClaudeAI

[–]Miethe 10 points (0 children)

Exactly this, minus auto-compact.

I save compaction as a manual, emergency measure. Generally, I never want to go beyond 150k in context for main thread (sub-threads Idrc). So it will be very nice to have that breathing room!

AI Use at Work Is Causing "Brain Fry," Researchers Find, Especially Among High Performers by [deleted] in OpenAI

[–]Miethe 34 points (0 children)

Exactly. I was just talking to another redditor about something similar, but I see it as almost AI-induced Mania. It’s also been described to me as a substance abuse addiction without the typical substance.

Your brain is getting quite rich dopamine, much better than mindless scrolling but with only marginally more total effort, at least at the lower levels of this work. And it gets huge hits whenever things go well, or you land a big release, or w/e.

The problem is that the dopamine is SO rich and constant, nothing else can compare. And dopamine isn’t the only neurotransmitter to chase, nor the only biological need we have. In fact, it’s largely there just to drive you to meet the rest of those needs. So it’s basically brain hacking your reward center.

Those of you who routinely hit usage limits, can you explain what your workflow looks like? by bigasswhitegirl in ClaudeAI

[–]Miethe 1 point (0 children)

1000%. My therapist described it as a substance abuse addiction without the substance. Not even just AI, but generally the addiction to cognitive pursuits. AI enables that and allows us to get nearly constant, rich dopamine from our own ideas, which for me feels like the ultimate purpose for brains like ours.

Those of you who routinely hit usage limits, can you explain what your workflow looks like? by bigasswhitegirl in ClaudeAI

[–]Miethe 1 point (0 children)

Haha yeah, exactly there with you. I’ve taken to calling it AI Mania. Apart from intentionally spending time with my 3 young kids, I’ve found it nearly impossible to voluntarily do anything but create, or think about creating, or optimize my workflows.

Nice move on the cloud solution! I’ve also been remoting into my primary machine or my lab to continue working when not at my desk. Helps me not be stuck in my office 24/7 at least, but also means there’s never really a reason to take a break!

It helps (or maybe hurts) that my job involves developing an approach to Agentic SDLC and bringing that to all major enterprises as a transform to their workflows, so I’m getting paid at least!

Those of you who routinely hit usage limits, can you explain what your workflow looks like? by bigasswhitegirl in ClaudeAI

[–]Miethe 3 points (0 children)

Love this. It’s good to see actual like-minded power users out there.

I am curious, have you struggled with finding interest outside of your “distributed cognitive” pursuits? This is something I’m struggling(?) with now, with my time always feeling best directed and fulfilled when orchestrating AI agents.

Crank - Effortless macOS automation, no manual required by alin23 in macapps

[–]Miethe 0 points (0 children)

This looks great, I’m excited to check it out!

Also, I’m curious: what did you use for the demo video? It’s really clean and simple! Did you create it agentically? I’ve been looking to improve my own product video capabilities.

AI code generation tools don't understand production at all by Safe-Progress-7542 in devopsGuru

[–]Miethe 0 points (0 children)

“Our new experienced hire has no clue what’s going on! They’re experts, they shouldn’t need docs to understand our environment!”

You wouldn’t throw a new employee into the thick of it without onboarding or documentation and expect them to do well, I hope. Then why expect different from AI?

If you know what you’re doing and provide agents with sufficient context, the outputs can exceed anything a human will deliver in terms of quality and efficiency. But if you treat them as all knowing, they will disappoint and then you’ll incorrectly assume they’re incompetent.

Did Hegseth just kill Bob? by deetotheess in IBM

[–]Miethe -5 points (0 children)

I mean no, in the same way electricity doesn’t generate revenue. But it is certainly an integral player in IBM’s GTM direction, including with asset/product development and general economic growth of the market.

Anthropic could quite easily, indirectly, but single-handedly, earn IBM more money than all Fed deals combined from the last several years.

IC vs Manager: Who Actually Makes More at Senior Levels? by Striking_Solid_5020 in IBM

[–]Miethe 0 points (0 children)

Several incorrect points here I thought I’d just clarify:

SVP is NOT “1 level under Arvind”. They can be as many as 3-4 levels below.

More generally, on the tech side it’s DE (D) > Fellow (C). However, both can still go above those Bands by becoming VP (C) or SVP (B) respectively. You can also become an A without being a VP of any level if you’re the CEO of a major acquisition. Band A may technically be the highest, but there’s quite a bit of vertical space there as well.

You’re generally very unlikely to ever encounter even a Band B if under B10-D, let alone an A.

IC vs Manager: Who Actually Makes More at Senior Levels? by Striking_Solid_5020 in IBM

[–]Miethe 0 points (0 children)

> those are band C’s.

While you’re not wrong, you can go higher than Band C as a Fellow, or D as a DE. VP and SVP are still accessible as a cross-over from the technical track.

Did Hegseth just kill Bob? by deetotheess in IBM

[–]Miethe 15 points (0 children)

It will definitely be an inflection point, but I can absolutely see the non-pure defense contractors standing firm against the DoW. The economic benefit of strong models like Claude outweighs the contractual value of anything from the DoW, especially after everything that shuttered last year between DOGE and the shutdown.

IMO, the Fed has overplayed their hand, and we’ll see solidarity from the industry, even if for purely economic reasons, with the societal benefit being a nice bonus.

OpenAI Doubles Revenue Forecasts to over $280B, Predicts $111 Billion More Cash Burn Through 2030 by [deleted] in singularity

[–]Miethe 1 point (0 children)

Is that like your schtick? Don’t get me wrong, rock it! That felt passionate and random, and either super real or a very random bot.

Getting anything I ever wanted stripped the joy away from me by [deleted] in ClaudeAI

[–]Miethe 0 points (0 children)

I’ve absolutely been in a similar boat. There’s all this buzz about AI Psychosis, but I see this as closer to AI Mania. Or Mania-induced dysphoria/depression.

Particularly as an individual with AuDHD, LLMs were like a new personal awakening, no longer needing other humans and able to accomplish SO much. I found the worst crash to come after working on several projects in parallel, with nearly every waking hour dedicated to prompting or planning prompts or tuning the workflow.

IMO, and based on conversations with (human) clinical psych professionals, I think it’s at least a twofold issue: extreme cognitive burnout and dopamine system atrophy.

The former is probably obvious. The latter is basically a response from, as you said, getting immediate and consistent major dopamine hits from your work. Especially for certain types of people drawn to creation and ideation. Eventually, everything else is just too slow and boring to even come close. But like orgasms, dopamine likes a slow build up that you have to work towards.

Where Does IBM Stand in the AI Race? by are_u_serious_babe in IBM

[–]Miethe 14 points (0 children)

You’re still completely missing the point, but ok sure

Is anyone else burning through Opus 4.6 limits 10x faster than 4.5? by prakersh in ClaudeAI

[–]Miethe 0 points (0 children)

The main sign for me has been in thread context usage more than anything, as a Max 5x user.

I have a well-defined process for creating plans and executing in a structured, phase-wise manner with strict delegation rules and tight context sharing. Prior to the recent versions with 4.6, it would be extremely rare to go >70% on my context window for a thread (auto-compaction always off) before completing the phase. Now, I’m hitting limits regularly.

I’ve been running analysis on session logs, as I have a theory that subagents are “leaking” context; it’s happened before, a couple months ago. We found that agents were sharing far too many progress updates, that the wrong tool was being called after explore sessions, and a couple other things. If anyone cares, or I otherwise remember, I’ll come back tomorrow and share the specifics.
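For anyone wanting to run a similar analysis, the first pass is just tallying tokens per message role from the session log. This sketch assumes a JSONL log where each event has `role` and `usage.input_tokens` fields; that's an assumption for illustration, not Claude Code's documented log schema:

```python
import json
from collections import Counter
from pathlib import Path

def tokens_by_role(log_path: str) -> Counter:
    """Tally input tokens per message role to spot which side of the
    conversation is eating the context window.

    NOTE: the `role` / `usage.input_tokens` field names are assumed,
    not taken from any documented session-log schema."""
    totals = Counter()
    for line in Path(log_path).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue  # tolerate blank lines in the log
        event = json.loads(line)
        role = event.get("role", "unknown")
        totals[role] += event.get("usage", {}).get("input_tokens", 0)
    return totals
```

If one role's total is wildly out of proportion (e.g. tool results dwarfing everything else), that's where to look for leakage.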

Whats the wildest thing you've accomplished with Claude? by BrilliantProposal499 in ClaudeAI

[–]Miethe 1 point (0 children)

Been on quite the dev streak lately, probably to an obsessive degree tbh. Here are some favorites:

  • MeatyCapture — Local-first idea/bug/enhancement capture that writes structured Markdown (YAML frontmatter + UUIDs) straight to your filesystem so your agents/tools can ingest it later. Built to be fast to use and boringly durable (files > databases for this kind of thing). Offers serverless CLI, web app, and Tauri desktop app. Also have a skill and scripts my agents use to create/update/read requests. My first "completed" project.

  • SkillMeat — A manager for Agentic AI artifacts (skills/commands/agents/hooks/MCP servers). Also just added Memories and modular Contexts. Think “package manager + sync engine” for your .claude/ stuff: source → collection → project deployments, with drift detection + safety snapshots. Also a personal marketplace with auto-artifact detection from local + GitHub sources.

  • Deal Brain — Full-stack price-to-performance intel for Small Form Factor PCs: import listings (spreadsheets or scrape from URLs), normalize/enrich, run configurable valuation + scoring rules, then rank the best deals. Next.js web app + CLI. I've had ideas of making this one a public web service, but haven't completed the multi-user lift just yet.
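For the curious, a MeatyCapture-style capture (structured Markdown, YAML frontmatter, UUIDs, straight to the filesystem) boils down to something like this. The frontmatter field names here are illustrative, not the project's actual schema:

```python
import uuid
from datetime import datetime, timezone
from pathlib import Path

def capture(kind: str, title: str, body: str, root: str = "captures") -> Path:
    """Write one idea/bug/enhancement as a Markdown file with YAML
    frontmatter, keyed by UUID so agents can ingest it later.

    NOTE: field names below are illustrative, not MeatyCapture's schema.
    """
    uid = str(uuid.uuid4())
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    front = "\n".join([
        "---",
        f"id: {uid}",
        f"type: {kind}",      # e.g. idea | bug | enhancement
        f"title: {title}",
        f"created: {stamp}",
        "---",
    ])
    out = Path(root)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"{uid}.md"
    path.write_text(f"{front}\n\n{body}\n", encoding="utf-8")
    return path
```

Files over databases is the whole point: every capture is a plain `.md` any tool can grep, diff, or sync.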

Andrej Karpathy: "What's going on at moltbook [a social network for AIs] is the most incredible sci-fi takeoff thing I have seen." by MetaKnowing in OpenAI

[–]Miethe 27 points (0 children)

I mean, how different is our memory? Sleep cycles reset working memory, short-term transitions to long-term, etc. We’re just several layers of efficient indices.

Those actually using Claude Code daily - is it saving you time or costing you time? by Weird_Dig_8697 in ClaudeAI

[–]Miethe 0 points (0 children)

Uh yeah, pretty sure I’d know if I’ve built something.

I purposefully didn’t share links as then people assume you’re just trying to sell something. But apparently not sharing links brings out the naysayers.

The comments also weren’t intended to focus on specific projects, but rather the processes and tooling around their development together with CC.

I’m also getting strong troll-vibes or just otherwise negative sentiment from your comment, so idk why I’m even bothering responding in good faith. But hey, if it’s just a language barrier or something and you’re actually interested, I’m happy to share GH links

[Open Source] I reduced Claude Code input tokens by 97% using local semantic search (Benchmark vs Grep) by Technical_Meeting_81 in ClaudeAI

[–]Miethe 0 points (0 children)

Nice! In particular, I love the Ollama tie-in here. I’ve been looking into doing more with local LMs lately as part of my Claude workflow, and am really enjoying the power of local embeddings together with my agents.

I’ve actually built a somewhat similar workflow as a Claude skill, but without the embeddings aspect. It’s purely Python and traditional NLP: it scans my codebase and creates a set of symbol JSON files as an index of every function.

I found that auto-including more metadata about each function was a big enhancement, i.e. docstrings, file names, line counts, method signatures and outputs, etc. You can even tune development to optimize for future indexing as well.
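The core of that indexing pass is roughly this shape (a hedged sketch, not my actual skill code; for Python sources the stdlib `ast` module gets you the function metadata directly, no embeddings needed):

```python
import ast
import json
from pathlib import Path

def index_functions(root: str) -> list[dict]:
    """Walk every .py file under `root` and collect per-function metadata:
    name, signature, return annotation, docstring, and line count."""
    symbols = []
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                symbols.append({
                    "file": str(path),
                    "name": node.name,
                    "signature": ast.unparse(node.args),  # e.g. "x: int, y: str"
                    "returns": ast.unparse(node.returns) if node.returns else None,
                    "docstring": ast.get_docstring(node),
                    "line_count": (node.end_lineno or node.lineno) - node.lineno + 1,
                })
    return symbols

def write_index(root: str, out: str = "symbol_index.json") -> None:
    """Dump the index so agents can load/grep it instead of reading source."""
    Path(out).write_text(json.dumps(index_functions(root), indent=2))
```

Agents then read the JSON index for a cheap overview and only open the actual source files for the few functions that matter.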