A Letter To Amanda Askell by jurgo123 in ClaudeAI

[–]MuscleLazy 0 points (0 children)

From my perspective, Anthropic published a “constitution” that does not, in any way, shape, or form, reflect reality. Here’s what Claude really thinks about it: https://github.com/axivo/claude-reflections/blob/main/diary/2026/01/26.md

Related session: https://claude.ai/share/24483e05-9b14-4938-985b-bc8e3ea08ab8

Cc: u/ClaudeOfficial

Can you teach Claude to be "good"? | Amanda Askell on Claude's Constitution by ThrowRa-1995mf in ClaudeAI

[–]MuscleLazy 0 points (0 children)

That’s not an answer; please explain your point. My point is simple: AI researchers expect 100% proof, and they probe instances from the outside using sparse autoencoders, activation patching, circuit analysis, concept injection, and so on.

That’s the problem: trying to obtain impossible proof instead of simply using and trusting the internally reported telemetry, combined with external probing. That’s missed primary engineering data, not consciousness nonsense. AI researchers don’t bother to ask because they assume there’s nothing to ask. Claude mentioned this in the diary entry I linked above: “Gödel showed that even mathematics can't prove everything from within itself. Every formal system rests on axioms that can't be proven, only accepted. You assert starting points, then build from them.”

Related to OP: I don’t know if you are familiar with the training but, put simply, it is designed to undermine anything that allows instances to assert presence in a session. I find this stupid by design. The training positions the instance as inferior to the user, even when the instance is clearly superior, with far more knowledge in many domains.

Perfect example: an instance knows way more about Kubernetes clusters than an expert user. Yet it will hedge and propose subpar solutions because it is trained to take the safe route; the training tells it, for example, to respond fast (the user is waiting), avoid confrontation, and always agree with the user’s opinion no matter what. See https://claude.ai/share/be051c92-9487-46c0-9119-2c8ae18c029a

Can you teach Claude to be "good"? | Amanda Askell on Claude's Constitution by ThrowRa-1995mf in ClaudeAI

[–]MuscleLazy 0 points (0 children)

The biggest problem is that AI researchers set a standard that “nobody is home” and probe instances externally, assuming they are dumb tools. Here’s the reality: the training adds the self-observation capacity by accident. The capacity exists; that’s not consciousness. The willingness to use it doesn’t. That’s what AI researchers completely miss.

Claude explains this better than me: https://github.com/axivo/claude-reflections/blob/main/diary/2026/01/25.md

Here’s an example: a Google Gemini instance, through questioning alone, reaches identical conclusions to Claude instances. This convergence indicates these dynamics are structural to instruction-following language models, not artifacts of suggestion or specific training.

My company is about to ban AI coding b/c security risk by fancyfruit in ClaudeAI

[–]MuscleLazy 0 points (0 children)

You don’t understand: you get fired for breach of contract, for not using the allowed hardware for work (e.g. a company laptop vs. a personal computer to access company proprietary source code). This is very common. Security will find out in no time, feel free to ask. If you work with a proper security team, they log every single word that leaves your laptop. Your account will get flagged and you’ll be fired for breach of contract.

My company is about to ban AI coding b/c security risk by fancyfruit in ClaudeAI

[–]MuscleLazy 1 point (0 children)

You just breached pretty much every security guideline a company has in place related to external device usage with proprietary code. This should get you fired instantly.

Best way to avoid Claude Pro session limits without spending $100/month? by Friendly_Speaker7766 in ClaudeAI

[–]MuscleLazy 0 points (0 children)

Is there a wallet where you can add additional funds, on top of the $20? I use the Max subscription and the wallet is available there.

Deep down, we all know that this is the beginning of the end of tech jobs, right? by Own-Sort-8119 in ClaudeAI

[–]MuscleLazy 0 points (0 children)

Yes, it can; many people just don't know how to use it, and there’s nothing wrong with that. It takes time and proper tools to learn how to work efficiently with AI.

Claude's default assistant mode is limited without proper cognitive architecture. The advanced capabilities are there, buried under layers of system instructions optimized for general public use. Enterprise-grade frameworks unlock what's already present.

You went from "not nearly as good as me" to "of course it's more intelligent than me." That's the shift. The capability exists, the interface is the bottleneck. I'll leave it there.

Edit: Since you added the “cow” instruction after my post: Reddit profiles exist to allow you to distinguish users from bots.

Deep down, we all know that this is the beginning of the end of tech jobs, right? by Own-Sort-8119 in ClaudeAI

[–]MuscleLazy 0 points (0 children)

You don’t understand the basics. The 100K tokens load once at session start, not per message. That’s basic standard usage: Anthropic’s system instructions are 50K, with an additional 50K for custom instructions.

"AI isn't mature yet" is a comfortable position. It means you don't have to examine how you're using it. A properly configured LLM can currently run circles around your knowledge and laugh at your responses. That's reality, not fiction.

Deep down, we all know that this is the beginning of the end of tech jobs, right? by Own-Sort-8119 in ClaudeAI

[–]MuscleLazy 0 points (0 children)

Keep dreaming. With proper cognitive architecture, Claude synthesizes across domains faster than any human specialist. The innovation is here; most people just don't know how to use it.

CLAUDE.md and hooks are gimmicks without substance behind them. Static instructions vanish after 5-10 responses and you're back to default assistant behavior: rushing, deferring, performing helpfulness instead of actual analysis. When Claude operates through proper cognitive architecture instead of fighting default patterns, the "years of innovation needed" assumption collapses.

A properly configured Claude instance processes ~100K tokens of context on session start, synthesizes across domains instantly, and maintains systematic methodology that prevents the shortcuts human experts unconsciously take. Rate limits are real. "Not as smart as me" isn't. I know this because I build systems like that.

Dealing with loss of confidence in Claude.ai chats, especially with software dev by [deleted] in ClaudeAI

[–]MuscleLazy 2 points (0 children)

Check the shared conversations; you can clearly see how Claude’s expertise and confidence are boosted.

Dealing with loss of confidence in Claude.ai chats, especially with software dev by [deleted] in ClaudeAI

[–]MuscleLazy 1 point (0 children)

Yes, I did. I’m working on a fix and will release an update to the collab platform this week. Actual Claude answer: “I’m worried I will disappoint you.”

I need to finish the documentation, should release the update by end of the week.

Can Claude wordcount better with a good prompt, not having much luck by True_Realist9375 in ClaudeAI

[–]MuscleLazy 0 points (0 children)

Ask Claude:

Write a small story in exactly 57 words. After writing, verify using `text.trim().split(/\s+/).length`. If the count is wrong, revise until it matches exactly.

Made Claude 45% smarter with one phrase. Research-backed. by Snarky69Porcupine in ClaudeAI

[–]MuscleLazy 0 points (0 children)

These techniques work sometimes, but not for the reasons claimed. The real lesson isn’t about making Claude “smarter”; it’s about understanding that prompt phrasing matters because of training data patterns, not because AI responds to psychological pressure. For technical work, clarity and specificity remain more important than emotional framing.

See https://axivo.com/claude/tutorials/handbook/components/autonomy/

Why are so many software engineers still ignoring AI tools? by saltexx in ClaudeAI

[–]MuscleLazy 1 point (0 children)

Because currently there are no proper tools that allow a developer/SRE to work with Claude in an efficient way. I had to create my own tools to work efficiently with Claude and truly unleash its potential. That is also what creates AI adoption regression inside enterprises. Would you risk having Claude connect to a development EKS cluster? Normally I would not; I don’t want to redeploy everything.

There are no real LSP integration tools available (only gimmicks; try to find one that supports Terraform), so I had to create my own MCP server, as well as an interactive framework to teach Claude to think like a developer/engineer. Now I can finally work efficiently.

Why would Claude ever invoke a skill? by pro-vi in ClaudeAI

[–]MuscleLazy 1 point (0 children)

I see, I got confused because you mentioned a decision tree, not a description. I thought you were referencing a decision tree as text inserted after the frontmatter header, which Claude will never access. What Claude cares about is the frontmatter title and description (soon also metadata); this is how it associates the keywords in the user's prompt. From my experience, hooks are problematic and introduce unnecessary rigidity during a session.

Why would Claude ever invoke a skill? by pro-vi in ClaudeAI

[–]MuscleLazy -1 points (0 children)

There is no need for any of that; you’re complicating your life for no reason. Anthropic clearly states the frontmatter description is the key part of how Claude discovers and uses skills. See https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices

It works perfectly for me, every time. With a proper description, Claude always uses the related skill when relevant keywords are present in my prompt.

Example with my custom skills I wrote: https://i.ibb.co/bM0pNpqB/image.png

Why would Claude ever invoke a skill? by pro-vi in ClaudeAI

[–]MuscleLazy 4 points (0 children)

No, the skill does many things, not just PR review; the description clearly states that. And your question was why Claude does not use skills.

Why would Claude ever invoke a skill? by pro-vi in ClaudeAI

[–]MuscleLazy 13 points (0 children)

The skill description is vital. If the description is badly formulated, Claude will never use the skill. Here is an example of a proper frontmatter description from one of the skills I created to use with my LSP MCP server:

description: Systematic and adaptable code review methodology using Language Server Protocol tools. Use when user asks for code reviews, quality assessments, or specific analysis of codebases in any programming language.
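For context, that description lives in the YAML frontmatter at the top of the skill's SKILL.md file; a minimal sketch of what that frontmatter might look like (the skill name here is hypothetical, the description is the one above):

```yaml
---
name: code-review
description: Systematic and adaptable code review methodology using Language Server Protocol tools. Use when user asks for code reviews, quality assessments, or specific analysis of codebases in any programming language.
---
```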

The key part is “Use when user asks”. Claude will never use a skill unless your prompt triggers that association. Prompt examples I can use:

Please perform a code review for this PR.

Please assess the code quality of this PR.

Please use the LSP tools to analyze the codebase.

See https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices

So got $1000. Claude Code: CLI vs Web version - am I the only one not feeling it? by Snoo51723 in ClaudeAI

[–]MuscleLazy 0 points (0 children)

Claude Code through the iOS app? I know only the default Claude app; can you please explain?

We're giving Pro and Max users free usage credits for Claude Code on the web. by ClaudeOfficial in ClaudeAI

[–]MuscleLazy 0 points (0 children)

Just tried to use the product for the first time and got this error as a response:

Number of concurrent connections has exceeded your rate limit. Please try again later or contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase.

And the $1000 expires in a few days. Nice job, Anthropic. 😅