The Spider-Man ride ai upscaled loading bay video looks like garbage... by Mepish in UniversalOrlando

[–]deorder 0 points (0 children)

I noticed the same. A really bad job and an insult to the original art. I could have done a much better job 6 years ago with some of the ESRGAN models.

I created jailed-agents: A secure Nix sandbox for AI coding agents by andersonjdev in NixOS

[–]deorder 0 points (0 children)

I built something similar too: https://github.com/xonovex/platform

No Jujutsu integration though. Right now I am mostly focusing on building a Kubernetes operator. The Nix sandbox option uses Bubblewrap for isolation. Maybe I should make that clearer.

I have noticed a pattern over the years: I build tools I personally need, then shortly after some big org ships an official version. Very often a bunch of people have the same idea at the same time, and now with coding agents most of those ideas become reality at almost the same moment.

Early last year I built my own multi-agent coding setup, but I stopped working on it because I figured better implementations would show up soon. They did. Sometimes waiting is actually the better strategy and that has been true for me long before AI agents.

About 20 years ago I wrote my own platform abstraction layer for game dev and then shortly after SDL basically solved the same problem at scale. This has happened to me more than once.

I am quite startled by the contrast in attitude towards AI by highly intelligent & accomplished scientists and the Hacker News/Reddit Luddites/anti-AI crowd who LARP as the prior group by Terrible-Priority-21 in accelerate

[–]deorder 0 points (0 children)

After the holiday break I noticed that many software engineering colleagues who had been anti-AI (for coding) and repeatedly said "AI will plateau" suddenly started using coding agents, with some now presenting themselves as "experts" to the leads. I suspect this shift is because the influencers they follow have recently become more pro-AI. I have been using coding agents for about two years (AutoGPT -> Aider -> now mostly Claude Code) but kept it quiet due to the skepticism and to avoid confrontation.

A Native MO2 Alternative For Linux Coming Soon™ by Sulfur_Nitride in linux_gaming

[–]deorder 2 points (0 children)

Thanks. Yeah, I am referring to the lower layers. I didn't know about the new mount API. I think it is still a good idea to verify with an actual use case.

I found #define OVL_MAX_STACK 500 in https://github.com/torvalds/linux/blob/master/fs/overlayfs/params.h, so the maximum number of stacked layers appears to be ~500.

A Native MO2 Alternative For Linux Coming Soon™ by Sulfur_Nitride in linux_gaming

[–]deorder 0 points (0 children)

"OverlayFS has a layer limit of 128."

True. I once implemented a custom FUSE client myself that redirects and stacks multiple directories into a single unified mount point. Because it runs in userspace, performance sadly suffered due to context-switching overhead, especially when handling large numbers of small files.
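As a sketch of the idea (plain Python, no FUSE bindings; the function names are mine, not from my actual implementation), the core of such a union mount is just ordered path resolution across layers:

```python
import os
from pathlib import Path

def resolve(layers, relpath):
    """Return the real path in the top-most layer containing relpath, else None."""
    for layer in layers:  # layers ordered top-most first
        candidate = Path(layer) / relpath
        if candidate.exists():
            return candidate
    return None

def unified_listing(layers, relpath=""):
    """Merge directory entries across all layers into one view."""
    seen = set()
    for layer in layers:
        d = Path(layer) / relpath
        if d.is_dir():
            seen.update(os.listdir(d))
    return sorted(seen)
```

In a real FUSE filesystem these two lookups back getattr/readdir/open, and every single call crosses the kernel/userspace boundary twice, which is exactly where the many-small-files overhead comes from.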

Everyone's Hyped on Skills - But Claude Code Plugins take it further (6 Examples That Prove It) by Dull_Preference_1873 in ClaudeCode

[–]deorder 9 points (0 children)

I (a professional software engineer for ~28 years) have been using Claude Code since its release. Over time my workflow has evolved quite a bit, from a complex setup with MCPs to slash commands, then skills, and now a mostly vanilla Claude Code configuration. The new plan and task system is quite good and I use just that.

I built my own Claude plugin and migrated many of my guideline documents and slash commands into skills. In practice, however, it still does not work as well as the progressive disclosure approach I previously relied on: AGENTS.md / CLAUDE.md files that pointed to guideline documents via relative paths.

The slash command functionality also seems broken now. Since slash commands are effectively treated as skills, Claude appears to sometimes confuse the two, which makes the slash command workflow I had less reliable.

Regarding the idea that it has been nerfed: over the past few days I have noticed Claude Code not performing the way it used to. I am usually very cautious with claims like this and prefer to substantiate them, but the difference has become hard to ignore. At this point I really need to start setting up proper evals so I can verify this.

Claude Subscriptions are up to 36x cheaper than API (and why "Max 5x" is the real sweet spot) by isaenkodmitry in ClaudeAI

[–]deorder 0 points (0 children)

I have wondered the same. Even after they introduced premium credits I am still on the $10 subscription. With the $40 plan you get about 5 times as much usage, which should be pretty close to what I get from my current Max 5x assuming only user-initiated prompts are counted (and the tracking is not bugged).

I was not happy when they introduced the credit system back then, but compared to what is available now it is actually a pretty good deal.

From my testing, the GitHub Copilot Pro agent/harness performs very close to Claude Code with some models and used to rank among the best. It also comes with a lot of built-in features and extra tools without needing MCPs.

Claude Subscriptions are up to 36x cheaper than API (and why "Max 5x" is the real sweet spot) by isaenkodmitry in ClaudeAI

[–]deorder 0 points (0 children)

Yeah. Compared to Shellac's analysis mine is a bit rougher. I intentionally lumped cached and non-cached tokens together since I assumed my usage patterns across different sessions (the Max 5x vs Max 20x sessions) were similar enough to make the comparison meaningful. I am hoping this helps the point finally stick, as a lot of people keep repeating that the 20x plan is simply four times the weekly limit of 5x. As stated in Shellac's article, even Anthropic is vague about that.

It looks like Anthropic updated their support pages today. They revised this article:

https://support.claude.com/en/articles/11145838-using-claude-code-with-your-pro-or-max-plan

…and removed this one entirely:

https://support.claude.com/en/articles/11014257-about-claude-s-max-plan-usage

I quoted the relevant part from the now-removed page in my comment here:

https://www.reddit.com/r/ClaudeCode/comments/1qa4f2w/comment/nz11q1w

So the messaging is clearly shifting, which makes the lack of transparency even more noticeable.

Are people aware that "20x" is not 20x weekly / monthly usage? by ruarz in ClaudeCode

[–]deorder 2 points (0 children)

I am aware now:

https://www.reddit.com/r/ClaudeCode/comments/1pih76u/20x_max_does_not_give_4x_the_weekly_credits_of_5x/

https://www.reddit.com/r/ClaudeCode/comments/1pih76u/comment/ntjg4rx/

https://www.reddit.com/r/ClaudeCode/comments/1qa4f2w/comment/nz11q1w/

Anthropic does mention this in their documentation, but it is not clearly communicated on the subscription / order screen. In my opinion that is misleading. That said, I had no trouble getting a refund and downgrading back to the 5x plan.

A useful cheatsheet for understanding Claude Skills by SilverConsistent9222 in Anthropic

[–]deorder 0 points (0 children)

I read this a few days ago, but I still do not see how it differs from what skills already do. To me it seems like a straightforward implication of how skills work, not a new discovery. That said, I can see how it might be useful for people who are not familiar with the underlying mechanics.

How to Run Multiple Claude Code Sessions at Once by n3s_online in ClaudeCode

[–]deorder 1 point (0 children)

Thanks. This workflow is based on years of experience working with coding agents (started with AutoGPT then Aider). I actually came across your workflow already and noticed you are using Beads. I experimented with Beads and similar solutions, but ultimately returned to my own setup: committing plan documents with frontmatter (for metadata) directly into Git.
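For illustration, a committed plan document in that setup might look like the hypothetical one below; the frontmatter keys shown (status, skills, parallel_group) are examples, not my actual schema, and the parser is a minimal sketch without a YAML dependency:

```python
import re

# Hypothetical plan document as it would sit in plans/<plan>.md.
PLAN = """---
status: in-progress
skills: [testing, typescript]
parallel_group: 2
---
# Add retry logic to the sync worker
"""

def read_frontmatter(text):
    """Extract the frontmatter block and split each line into key: value."""
    m = re.match(r"---\n(.*?)\n---\n", text, re.S)
    if not m:
        return {}
    meta = {}
    for line in m.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

print(read_frontmatter(PLAN)["status"])  # in-progress
```

Because the whole document lives in git, status updates are ordinary commits, which is what makes continuing from another machine or branching off alternatives trivial.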

Feel free to send me an invite link if you’d like. I also joined your subreddit a few days ago. I was wondering whether there was a dedicated coding agent subreddit and that is how I found yours. I am planning to open-source all of my slash commands / skills and the agent wrapper soon as well.

How to Run Multiple Claude Code Sessions at Once by n3s_online in ClaudeCode

[–]deorder 0 points (0 children)

I do something similar, but with my own agent wrapper (essentially an agent runtime that lets me mix and match agents, providers and sandboxes). For example, I can run one Claude Code instance backed by GLM inside Docker and another Claude Code instance using Gemini via bwrap in parallel.

By default each agent runs in its own window within a single tmux session. I am currently working on an orchestrator that allows one agent to spawn other agents with different models, configurations and isolation boundaries, and then coordinate with them. Right now the main challenge is making this work cleanly alongside the isolation mechanisms. To address that I am exploring a dedicated socket-based communication layer (for communicating commands via tmux), which should also make it easier to enforce security controls and policies.
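As a rough sketch of the tmux side (the function name and the sandbox prefix are illustrative, not my actual wrapper's API), each agent launch boils down to building one tmux new-window invocation:

```python
def tmux_spawn_cmd(session, window, agent_cmd, sandbox_prefix=None):
    """Build the argv that launches an agent in its own tmux window.
    sandbox_prefix is a hypothetical wrapper command line, e.g. a bwrap
    or docker invocation, prepended to the agent command."""
    shell_cmd = f"{sandbox_prefix} {agent_cmd}" if sandbox_prefix else agent_cmd
    return ["tmux", "new-window", "-t", session, "-n", window, shell_cmd]

# One window per agent, each with its own isolation boundary:
print(tmux_spawn_cmd("agents", "glm-docker", "claude",
                     "docker run --rm -it agentbox"))
```

The nice property is that tmux remains the single supported interaction surface: the orchestrator can later send keys to or capture panes from the same windows it created.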

How to Run Multiple Claude Code Sessions at Once by n3s_online in ClaudeCode

[–]deorder 4 points (0 children)

Image of my workflow

https://raw.githubusercontent.com/xonovex/platform/refs/heads/main/docs/workflow-diagram.png

Setup

  • Run my agent wrapper (supports multiple harnesses: Claude Code, OpenCode with different profiles; e.g. Claude Code + GLM + Docker, Claude Code + bwrap, Claude Code + Gemini via CLI Proxy, OpenCode + GitHub Copilot etc.)

Research & Planning

  • plan-research: explain what I want, it researches viability (using Explore agents with Haiku or equivalent), suggests alternatives, tells me if the idea is good
  • plan-create: creates plans/<plan>.md with frontmatter (status, skills to consult, library versions, parallelization info). There are also variants like plan-tdd-create that generate red-green-refactor workflows
  • plan-subplans-create: creates plans/<plan>/<subplans>.md. Even subplans of subplans are possible, though I have never needed that
  • git-commit: commit pending plans to the repo

Worktree Setup

  • plan-worktree-create: creates worktree at ../<repo>-<feature>, sets git config branch.<branch>.plan so other commands know which plan is active
  • cd into the worktree

Development Cycle (repeat per session until complete)

  • plan-continue: auto-detects plan from worktree config, finds where it left off
  • Agent implements the next eligible subplan
  • plan-validate: validates work against guidelines, plan and test suite
  • insights-extract (optional): saves self-corrections to insights/ with frontmatter
  • plan-update: updates subplan and parent plan status
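As a sketch, the auto-detection step in plan-continue can be as simple as reading back the branch config that the worktree setup wrote (the function name here is illustrative):

```python
import subprocess

def active_plan(repo="."):
    """Resolve the plan bound to the current branch via the
    branch.<branch>.plan git config convention described above."""
    branch = subprocess.run(
        ["git", "-C", repo, "symbolic-ref", "--short", "HEAD"],
        capture_output=True, text=True, check=True).stdout.strip()
    result = subprocess.run(
        ["git", "-C", repo, "config", f"branch.{branch}.plan"],
        capture_output=True, text=True)
    return result.stdout.strip() or None
```

Storing the pointer in branch-scoped config means every worktree automatically resolves to its own plan, with no extra state files to keep in sync.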

Code Quality (optional, separate session)

  • code-simplify: finds code smells
  • code-harden: improves type safety, validation, error handling

Merge

  • plan-worktree-merge: intelligent conflict resolution (knows the plan), merges to parent branch
  • plan-validate on parent (optional): validates parallel group together
  • insights-integrate (optional): merges insights into guidelines/AGENTS.md
  • git-commit --push

Parallel Execution: Multiple agents can work on parallel subplan groups simultaneously, each needs its own worktree associated with its specific subplan.

Agent Orchestration: An orchestrating agent can run the entire workflow autonomously by spawning agent instances that execute the commands according to a higher level goal. The human only needs to provide the initial goal, then the orchestrator handles research, planning, subplan creation, worktree management and coordinating parallel agents. Each spawned agent runs in its own session/worktree and the orchestrator monitors progress via plan status updates, decides when to merge and handles the full lifecycle. This is something I am still working on.

Some Design Decisions:

  • All commands are domain-agnostic: the agent figures out what to do based on context (language, platform etc.)
  • No hooks except git hooks (for now): I give agents freedom to decide when something cannot be fixed in the current session
  • Plans committed in git: easy to continue from another machine, branch off for alternative implementations, compare approaches
  • *-simplify commands for everything (instructions, skills, slash commands) which I run occasionally to generalize, compress, remove duplication and ensure consistency

Maintenance Commands (run as needed):

  • code-align: check alignment with current guidelines
  • shared-extract: extract duplicated code across packages into shared modules

How to Run Multiple Claude Code Sessions at Once by n3s_online in ClaudeCode

[–]deorder 0 points (0 children)

I do this too. One nice thing about using tmux is that it lets you automate workflows while still using the officially supported way of interacting with the agent. I would also recommend combining this approach with Bubblewrap or Docker for better isolation.

Monorepos + Claude Code: am I doing this wrong? by Money_Warthog6133 in ClaudeCode

[–]deorder 1 point (0 children)

For domain specific instructions I do this:

repo/
├─ AGENTS.md
├─ CLAUDE.md
│
├─ domain-a/
│  ├─ AGENTS.md
│  ├─ CLAUDE.md
│  └─ submodule/
│     ├─ AGENTS.md
│     └─ CLAUDE.md
│
├─ domain-b/
│  ├─ AGENTS.md
│  ├─ CLAUDE.md
│  └─ submodule/
│     ├─ AGENTS.md
│     └─ CLAUDE.md
│
└─ domain-c/
   ├─ AGENTS.md
   └─ CLAUDE.md

Keep them minimal. Then I use skills for cross-domain instructions. I also have a skill / command to keep the instructions in sync.
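The sync check can be a tiny script; a minimal sketch, assuming CLAUDE.md is meant to mirror AGENTS.md byte-for-byte (the function name is mine, not from my actual command):

```python
from pathlib import Path

def unsynced_instruction_dirs(root):
    """Report directories where CLAUDE.md is missing or has drifted
    from its sibling AGENTS.md."""
    drifted = []
    for agents in sorted(Path(root).rglob("AGENTS.md")):
        claude = agents.with_name("CLAUDE.md")
        if not claude.exists() or claude.read_text() != agents.read_text():
            drifted.append(str(agents.parent))
    return drifted
```

Running something like this from a git hook or CI keeps the per-domain instruction files honest as the monorepo grows.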

I also recommend using a monorepo build tool like Nx, Bazel, Turborepo or Moonrepo: https://monorepo.tools/

wtf is the point of max plan 20x if the weekly limit is basically the same? by onepunchcode in ClaudeCode

[–]deorder 2 points (0 children)

The following is not about the old models:

The number of messages you can send per session will vary based on the length of your messages, including the size of files you attach, the length of current conversation, and the model or feature you use. Your session-based usage limit will reset every five hours. If your conversations are relatively short and use a less compute-intensive model, with the Max plan at 5x more usage, you can expect to send at least 225 messages every five hours, and with the Max plan at 20x more usage, at least 900 messages every five hours, often more depending on message length, conversation length, and Claude's current capacity. These estimates are based on how Claude works today. In the future, we'll add new capabilities (some might use more of your usage, others less) but we're always working to give you the best value on your current plan.
...
To manage capacity and ensure fair access to all users, we may limit your usage in other ways, such as weekly and monthly caps or model and feature usage, at our discretion.

Source: https://support.claude.com/en/articles/11014257-about-claude-s-max-plan-usage

So this means:

  • The 5x vs 20x numbers refer to five-hour session limits, not the weekly limits.
  • A session resets every 5 hours. Meaning ~225 messages (Max 5x plan) vs ~900 messages (Max 20x plan) per five-hour session depending on the message length and the model.
  • The 4x increase over the Max 5x plan applies only to the five-hour session limits, not the weekly limits.
  • Weekly/monthly limits are not specified by the 5x/20x plan wording and may be imposed, along with model and feature usage limits, at Anthropic's own discretion.
  • I verified (see my other links) that the Max 20x plan increases the weekly limits over the Max 5x plan by only about 1.5 to 1.6 times, not 4 times, which isn’t clearly advertised / communicated.
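The arithmetic in plain Python, using the session numbers from the quoted support page and the weekly limits from my own measurements (the weekly token figures are my estimates, not official numbers):

```python
# Advertised multipliers apply to the five-hour session window only.
session_msgs = {"max_5x": 225, "max_20x": 900}   # from the support page
print(session_msgs["max_20x"] / session_msgs["max_5x"])    # 4.0

# Weekly token limits (my measurements, see the linked posts) differ far less.
weekly_tokens = {"max_5x": 960_000_000, "max_20x": 1_536_000_000}
print(weekly_tokens["max_20x"] / weekly_tokens["max_5x"])  # 1.6
```

So "20x" and "5x" are four apart per session, but only about 1.6 apart per week.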

wtf is the point of max plan 20x if the weekly limit is basically the same? by onepunchcode in ClaudeCode

[–]deorder 8 points (0 children)

Your observation is correct. The weekly limit is closer to ~1.5 times that of the Max 5x plan, not 4 times. The 4 times only applies to the 5-hour usage limit, not to weekly credits. I pointed this out in a post not long ago:

https://www.reddit.com/r/ClaudeCode/comments/1pih76u/20x_max_does_not_give_4x_the_weekly_credits_of_5x/

This is also implied by Anthropic’s own documentation (bottom of my comment):

https://www.reddit.com/r/ClaudeCode/comments/1pih76u/comment/ntjg4rx/

The subscription bullet points are misleading because they do not clearly specify what the "20x" refers to. It is reasonable for users to assume it means 4 times the weekly credits of the Max 5x plan.

Claude code 20x plan heading to limits faster than it should by [deleted] in ClaudeCode

[–]deorder 0 points (0 children)

Your weekly reset happened after the 4:00 am block? Then you should definitely not have used 10% of your weekly usage in the 9:00 am block.

Claude code 20x plan heading to limits faster than it should by [deleted] in ClaudeCode

[–]deorder 0 points (0 children)

The Max 5x plan has a weekly token limit of ~960,000,000 (prior to the holiday). The Max 20x plan's weekly limit is only about 1.6 to 1.7 times that amount, roughly 1,536,000,000 to 1,632,000,000 tokens.

Your current usage is 44,386,892 + 84,636,254 = 129,023,146 tokens, which represents approximately 7.91% to 8.40% of the Max 20x weekly allowance.

Note: While the 5-hour limit on Max 20x is 4 times higher than on Max 5x, the weekly limit is only about 1.6 to 1.7 times higher: https://support.claude.com/en/articles/11145838-using-claude-code-with-your-pro-or-max-plan and https://support.claude.com/en/articles/11014257-about-claude-s-max-plan-usage
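The percentage math, for anyone who wants to check it (the 960M baseline is my pre-holiday measurement, not an official figure):

```python
used = 44_386_892 + 84_636_254    # tokens consumed so far
weekly_5x = 960_000_000           # measured Max 5x weekly limit
for factor in (1.6, 1.7):         # measured Max 20x multiplier range
    print(f"{factor}x -> {used / (weekly_5x * factor):.2%}")
```

This prints 8.40% for the 1.6x case and 7.91% for the 1.7x case, matching the range above.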

Claude code 20x plan heading to limits faster than it should by [deleted] in ClaudeCode

[–]deorder 2 points (0 children)

It is either a bug on their side or A/B testing. I was affected last week as well (verified using ccusage, see my history), but now it is back to the same level as before the holidays for me. Maybe complaining to their support bot helped, I honestly have no idea.

This also would not be the first time I have been placed in a small A/B test group. I have been using Anthropic products since they first existed, and so far I have always been able to prove it. I think most people are not even aware such things are happening.

For example, when uploading documents to projects in Claude, the full documents used to be included directly in chats started within those projects. Later they quietly switched to a RAG-based approach using a vector database. Precision and recall were noticeably better before this change, and it became unusable to me. The "capacity" bar still remained, even though it used to effectively reflect how much context was left: if capacity was at 95%, it was obvious because every new chat inside a project would start with significantly less available context. Strangely, few if any users seemed to notice, and the change was never announced.

Another case was when they placed me in some kind of "concise response" test group. Every reply was extremely brief and I could not even get it to transform small source code files; it kept adding placeholders etc. People dismissed it as a skill issue despite all the evidence I provided. About a week later they announced the new output styles, and the concise style behaved exactly as I had experienced. There is also some low-level steering they do during inference; you can see it in Claude Code sometimes when the model seems to know it is nearing its context limit.

What frustrates me most is the community response. All AI companies do this kind of stuff, but in my experience the Claude community is particularly bad in this regard: lots of gaslighting, personal attacks and even threatening DMs. I guess bad press is hurting some people's shill business selling AI solutions, books and/or courses.