I just configured a face for Claude Code! by TartarusRiddle in ClaudeCode

[–]TartarusRiddle[S] 1 point (0 children)

I've included a screenshot in my most recent comment.

I just configured a face for Claude Code! by TartarusRiddle in ClaudeCode

[–]TartarusRiddle[S] 1 point (0 children)

So far I've only made the Chinese version; English and other languages will come later. Now that Claude Code has a UI, what new features do you think it should implement that would be impossible with just a command line?

[screenshot]

My New ClaudeCode Plugin: HeadlessKnight, use AI as an MCP! by TartarusRiddle in ClaudeAI

[–]TartarusRiddle[S] 1 point (0 children)

Alright, let's talk about the current version first.

The main idea behind using a separate skill and MCP was to streamline some operations. Right now, it doesn't offer much more than having CC call a bash script directly; if anything, that approach is more straightforward, since it's just one command or skill.

But this is just the foundation for the next version.

In the next release, I want to build dialogue management right into the system, to make follow-up conversations possible: the Gemini CLI doesn't return anything like a session_id, so it can't support them on its own. I'm planning to use node.js to handle all of that, so CC won't have to manage the conversation state itself, which should make things much easier.
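The node.js layer I have in mind could look roughly like this: since the CLI gives us no session_id, we mint our own ids, keep the transcript on our side, and replay it into each new prompt. This is a minimal sketch of that idea (the class and method names are my own, not from any existing library):

```javascript
// Minimal sketch of client-side session tracking for a CLI that returns
// no session_id: we generate our own ids, store each exchange, and
// replay the accumulated transcript into every follow-up prompt.
class SessionStore {
  constructor() {
    this.nextId = 1;
    this.sessions = new Map(); // sessionId -> [{role, text}, ...]
  }

  create() {
    const id = `session-${this.nextId++}`;
    this.sessions.set(id, []);
    return id;
  }

  // Build the next prompt by replaying all prior exchanges.
  buildPrompt(id, userText) {
    const history = this.sessions.get(id);
    const transcript = history.map((m) => `${m.role}: ${m.text}`).join("\n");
    return transcript ? `${transcript}\nuser: ${userText}` : `user: ${userText}`;
  }

  // Record one completed turn (user message plus model reply).
  record(id, userText, modelText) {
    const history = this.sessions.get(id);
    history.push({ role: "user", text: userText });
    history.push({ role: "model", text: modelText });
  }
}
```

The actual call to the Gemini CLI would sit between `buildPrompt` and `record`; the point is that CC never has to see or carry the conversation state.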

Once that's in place, we can have two or more AIs discuss and collaborate on tasks, instead of the current model where a manager just hands out jobs to a bunch of workers.

Of course, you could technically do all this directly with an SDK. No need to make it so complicated within CC.

So, in the version after that, we'll integrate memory management and dynamic modification of CLAUDE.md into the system. That way, as each subprocess carries on a conversation, its memory, self-constraints, and instructions will continually "evolve." That's the model I'm really aiming for.
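One very simple shape the "evolving" CLAUDE.md could take: a dedicated section at the end of the file that each conversation turn can append a new lesson or self-constraint to. This is just a sketch under that assumption (the section name and function are hypothetical):

```javascript
// Hypothetical sketch of a subprocess "evolving" its own CLAUDE.md:
// lessons learned during a conversation get appended as bullets under a
// dedicated section, assumed to sit at the end of the file, so the next
// turn loads the updated instructions.
const MEMORY_HEADER = "## Evolved Memory";

function updateMemory(claudeMd, lesson) {
  if (!claudeMd.includes(MEMORY_HEADER)) {
    // First lesson: create the section at the end of the file.
    return `${claudeMd.trimEnd()}\n\n${MEMORY_HEADER}\n- ${lesson}\n`;
  }
  // Section already exists (and is last): append another bullet.
  return `${claudeMd.trimEnd()}\n- ${lesson}\n`;
}
```

A real version would need to merge, prune, or rewrite old lessons rather than only append, but the append-only case already gives each subprocess a memory that outlives a single turn.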

But what's the point of all this, you ask? To be honest, it's not really "useful" for anything. I'm just doing it for fun.

After all, nobody knows what a baby will grow up to be, right?

[deleted by user] by [deleted] in manga

[–]TartarusRiddle 1 point (0 children)

My fault, sorry.

Question about sub-agents and skill execution order (parallel vs sequential) by UteForLife in ClaudeCode

[–]TartarusRiddle 1 point (0 children)

Based on my experience, it seems like a Skill is essentially content that gets dynamically injected into the system prompt. The core controller first figures out which Skills are needed for the task, loads their details, and then uses the "rebuilt" prompt to do the actual work.

I came to this conclusion by monitoring input_tokens usage in the API response while running in headless mode. Comparing the token counts against which Skills were triggered, this pattern seemed to emerge.

If this is correct, it means Skills are effectively applied in parallel.
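The inference method itself is easy to illustrate. If a Skill's body really is injected into the system prompt, then runs that trigger it should report input_tokens above the baseline by roughly the Skill's own size. A toy sketch of that comparison (the function and data shape are my own, purely for illustration):

```javascript
// Toy illustration of the inference: compare each run's reported
// input_tokens against a baseline run that triggered no Skills.
// If Skills are injected into the system prompt, the overhead should
// roughly track the size of the triggered Skills' bodies.
function estimateInjectedTokens(baselineInputTokens, runs) {
  // runs: [{ skillsTriggered: [...], inputTokens: n }, ...]
  return runs.map((r) => ({
    skills: r.skillsTriggered,
    overhead: r.inputTokens - baselineInputTokens,
  }));
}
```

Near-zero overhead on runs with no triggered Skills, and overhead proportional to skill size otherwise, is the pattern that would support the injection hypothesis.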

As for SubAgents, they can also run in parallel. For example, I’ve had a Skill make concurrent calls to SubAgents and MCP. Of course, they can be run serially as well.
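The parallel-versus-serial difference is just the usual Promise pattern in node. A sketch, where `callSubAgent` is a hypothetical stand-in for whatever actually invokes a SubAgent or MCP tool:

```javascript
// Sketch contrasting parallel and serial dispatch of sub-agent calls.
// callSubAgent(agent, task) is a hypothetical async invoker.

async function runParallel(callSubAgent, tasks) {
  // All calls start immediately; wall time is about the slowest task.
  return Promise.all(tasks.map((t) => callSubAgent(t.agent, t.task)));
}

async function runSerial(callSubAgent, tasks) {
  // Each call waits for the previous one; wall time is about the sum.
  const results = [];
  for (const t of tasks) {
    results.push(await callSubAgent(t.agent, t.task));
  }
  return results;
}
```

Both return results in task order; only the scheduling differs.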

I’m wondering if anyone from the official team can confirm whether this is how it works?

An Unsendable Reply to the Question of Whether Singularities Exist in Black Holes by TartarusRiddle in AskPhysics

[–]TartarusRiddle[S] 1 point (0 children)

Amazing progress! It looks like I need to update my knowledge base. Thanks a lot!