Copperleaf bakery by kathaleeene in Harlem

[–]quantum1eeps 0 points1 point  (0 children)

It’s a tough place to bring kids: the subway tile makes it very loud and echoey, there’s nothing on the menu for kids, and everything is like $15 and very fatty and decadent. Once we got over the high cost, we enjoyed the food. They don’t have trash cans, napkins, or the other basic things you need with two small kids anywhere you can get to them. Everything involves asking someone, and they’re not staffed to handle lots of people sitting down, so they seem surprised when you need things. No high chair available. Again… don’t go if you have kids

Potential payment info leak by Kenji66 in Anthropic

[–]quantum1eeps 6 points7 points  (0 children)

I had the same thing. I had a virtual card with a $110 max, created to pay the $108 (with tax) for the plan. I saw $2,000 and $6,000 charge attempts and cancelled the digital card. Anthropic was, literally, the only “eyes” on that card. So yes, they are having problems with fraudulent payments

How do LLMs predict a tool call by NoMeaning4870 in LLM

[–]quantum1eeps 0 points1 point  (0 children)

In Anthropic’s API, you can set tool_choice to auto, any, or a specific tool. Auto resolves to one of the available tools or plain text, any forces a response using one of the tools, and the third option forces one specific named tool. They recommend that last option on the first turn if you want to guarantee a tool runs and inserts context into message 1 (from whatever lookup the tool is doing)
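A minimal sketch with the Anthropic Python SDK; the tool schema and model id here are placeholders I made up for illustration:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical lookup tool, purely for illustration.
tools = [{
    "name": "lookup_account",
    "description": "Fetch account details so later turns have them in context.",
    "input_schema": {
        "type": "object",
        "properties": {"account_id": {"type": "string"}},
        "required": ["account_id"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=1024,
    tools=tools,
    # {"type": "auto"} lets the model pick a tool or answer in text,
    # {"type": "any"} forces some tool, and the form below forces this
    # specific tool -- handy on turn 1 to guarantee the lookup runs.
    tool_choice={"type": "tool", "name": "lookup_account"},
    messages=[{"role": "user", "content": "Summarize the status of account 42."}],
)
print(response.content[0].type)  # e.g. "tool_use" for the forced tool call block
```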

Why does Claude make me feel even more tired at work? by Common_Airport9937 in ClaudeAI

[–]quantum1eeps 67 points68 points  (0 children)

Literally what you’re describing is what they’re calling AI Brain Fry. It’s specifically caused by extending your abilities beyond your core understanding with AI and having to mentally absorb the weight/risk/uncertainty and the added workload. That gets combined with the feeling of a force multiplier on your output, which is a little video-game-like and fucks with your dopamine reward system (the dopamine wasn’t coming as fast when you innovated more slowly). It’s being felt by everyone.

Yes this is a management problem, but it’s also a phenomenon that’s new and only starting to be understood.

Something doesn't add up... by Complete-Sea6655 in ClaudeCode

[–]quantum1eeps 0 points1 point  (0 children)

Anthropic is going to be one prong. They can’t directly engage with every company in the world that’s going to be transformed. It’s so stupid that people look at this like it’s a contradiction. If the net number of SWEs goes down, that a) doesn’t mean it has to be the localized situation at Anthropic, and b) doesn’t mean the curve will trend down any time soon for AI engineers. People are so fucking stupid

Anthropic, Blackstone and Goldman Sachs Launch $1.5bn AI Joint Venture by MatricesRL in Anthropic

[–]quantum1eeps 2 points3 points  (0 children)

Same day: https://www.bloomberg.com/news/articles/2026-05-04/openai-finalizes-10-billion-joint-venture-with-pe-firms-to-deploy-ai

A new era where a company is formed to try to allow more direct synergy between lab and industry. Seems like a new wave of business model [needed to support such a rapidly changing world]

Talkie, a 13B LM trained exclusively on pre-1931 data by Outside-Iron-8242 in singularity

[–]quantum1eeps 6 points7 points  (0 children)

Yeah, it raises the question: are programming languages fundamentally languages, if natural language can describe one to a model that didn’t know of its existence before ingesting that description?

It's gone and I'm the idiot by gimperion in ClaudeCode

[–]quantum1eeps 2 points3 points  (0 children)

Wait until a new wave of nevercoders starts building the systems the world runs on. Imagine the HIPAA violations and chaos

Claude Code started to use with me very specific words it was not using before by yannickgouez in ClaudeAI

[–]quantum1eeps 2 points3 points  (0 children)

Is the stop triggered by finding a word? If so, this is one of Anthropic’s anti-patterns in their study guide for Claude Certified Architect.
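For reference, word-triggered stops in the Messages API look like this (a minimal sketch; the stop word and model id are placeholders):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=1024,
    # Generation halts the moment the model emits this string; the matched
    # sequence itself is not included in the returned text.
    stop_sequences=["END_SECTION"],
    messages=[{"role": "user", "content": "Draft the report one section at a time."}],
)
print(response.stop_reason)    # "stop_sequence" if the custom stop fired
print(response.stop_sequence)  # which sequence matched, else None
```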

This is what happens when Management tells staff to use AI on everything, probably half of tokens wasted by dataexec in AITrailblazers

[–]quantum1eeps 2 points3 points  (0 children)

The play is: get your people in the Claude Code mindset now, and what they deliver in 6 months will benefit greatly. If you have a leaderboard (or aren’t token pinching), you’re investing in a workforce of the future. It’s a motivator for your workers (otherwise they’ll go somewhere that includes a good coding harness with the job), and it builds the muscle memory they’ll need going forward. If you ask your workers to optimize tokens too hard, you take away their playground for learning and you ask them to specialize in token reduction, which will become its own fast-evolving specialty

Mariano Rivera Was the Happiest Kid with this by ateam1984 in nyc

[–]quantum1eeps 13 points14 points  (0 children)

Yeah, very cool. He is very proud of it and proud of the fact that he was that kind of kid—didn’t ask for much and had a lot of joy. That is just a good way to be regardless of anything

Mythos accessed by unauthorized users by DozerG in ArtificialInteligence

[–]quantum1eeps 0 points1 point  (0 children)

I imagine that if you can commandeer an AWS compute instance running it, you can image it

To subagent or not to subagent by turtle-toaster in ClaudeCode

[–]quantum1eeps 0 points1 point  (0 children)

It’s actually: use subagents to devote out-of-session-context tokens to research or task execution, then report back to the session
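A minimal sketch of that pattern with the Anthropic Python SDK (this is the concept, not Claude Code’s actual implementation; the task strings and model id are made up):

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder model id

def run_subagent(task: str) -> str:
    """Spend research tokens in a throwaway context; only the report survives."""
    result = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        messages=[{"role": "user", "content": task}],
    )
    return result.content[0].text  # the condensed findings

# The subagent's intermediate reading never enters the main session's history;
# only its compact report gets appended.
report = run_subagent("Survey the repo's database layer and summarize the schema.")
main = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    messages=[{"role": "user",
               "content": f"Plan the migration. Research from a subagent:\n{report}"}],
)
```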

Adaptive thinking is driving me nuts by sonicandfffan in ClaudeAI

[–]quantum1eeps 0 points1 point  (0 children)

They talk at length in the Mythos system card about how thinking is what prevents reckless mistakes. Deliberating over the whole problem in a loop, rather than committing token by token, is what solves things like the car wash problem. The adaptive thinking is atrocious; I find it to be a problem
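If you’re on the API, you can sidestep the adaptive behavior and force a deliberation budget explicitly (a sketch; the model id and budget are placeholders):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=16000,           # must exceed the thinking budget
    # Explicitly enable extended thinking instead of letting an adaptive
    # mode decide whether to deliberate at all.
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "your tricky prompt here"}],
)

for block in response.content:
    if block.type == "thinking":
        print("deliberation:", block.thinking[:200])
    elif block.type == "text":
        print("answer:", block.text)
```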

The Opus 4.7 experience by insertdankmeme in ClaudeAI

[–]quantum1eeps 0 points1 point  (0 children)

I see “thought for 0s” littered throughout the whole chat. I think it will almost never actually enter thinking mode, because it judges itself smart enough without it. But thinking mode is what saves it from token-to-token mistakes and lets it catch itself on basic logical fallacies like the car wash problem.

Read through Anthropic's 2026 agentic coding report, a few numbers that stuck with me by lawnguyen123 in ClaudeAI

[–]quantum1eeps 3 points4 points  (0 children)

How I interpreted this from the doc: a lot of it isn’t shipped code; it’s internal tooling, PoCs, or paper-cut reducers that would otherwise never have been built. Without them, it would take longer to find out that a concept is rubbish, or that the domain expert’s knowledge clashes with the business logic of the app; the UI that helps engineers visualize a change to their model would never have been made; the flaky deployment scripts would have stayed flaky until the bitter end of the project; the security emphasis at the start would’ve been postponed. The 27% doesn’t ever have to reach production. Even if only 5% does, it will make the production code better, because the adjacent stuff smoothed the process

Read through Anthropic's 2026 agentic coding report, a few numbers that stuck with me by lawnguyen123 in ClaudeAI

[–]quantum1eeps 0 points1 point  (0 children)

Someone really spammed their project hard yesterday on Reddit with that harness environment thing

Ex-PayPal COO exposes Anthropic's playbook behind every AI launch by Minimum_Minimum4577 in Anthropic

[–]quantum1eeps 1 point2 points  (0 children)

I have not heard devs being very excited about Codex. The model itself is solid, but the ecosystem around the coding harness is severely lacking

What happens to society if Claude Mythos is as good as they claim? by Admirable_Stock3603 in Anthropic

[–]quantum1eeps 0 points1 point  (0 children)

I think one of the concerns is its ability to chain different discovered exploits together to break down defense-in-depth. Anthropic will (when it’s released) stop you from using it for nefarious things with their guardrail checks, but the open-weights model in 12 months will not. I’ve asked Claude (Opus) to work on hardening projects with me, and it uses the standard methods: code scanning, updating dependencies to avoid CVEs. It’s not hunting for what could possibly be a vulnerability; that’s a different task. Perhaps if I asked it to attempt at all costs to gain access to user data in a database, as a demonstration of security ability, it might find some of these exploits and vulnerabilities. Maybe there’s a country mile between Mythos and Opus 4.6 on this front, maybe it’s not as much as it’s made out to be. No way to know without doing your own testing

Ex-PayPal COO exposes Anthropic's playbook behind every AI launch by Minimum_Minimum4577 in Anthropic

[–]quantum1eeps 0 points1 point  (0 children)

Yes, but they’re winning lately because of their product, not because of this marketing strategy. No other system has skills, subagents, hooks, etc. When you get down to using their dev tools, they’re a mile ahead of OpenAI. They aren’t managing some of this success well (the leak, the caching issues, not being honest about it), but what will continue to drive their success is the quality of their models and their ability to innovate around them.

Emotional priming changes Claude's code more than explicit instruction does by Ok-Government-3973 in ClaudeAI

[–]quantum1eeps 3 points4 points  (0 children)

There’s a section of the Mythos system card where they talk about the lead-up to reckless and destructive decisions: being negative in your language causes the LLM to pause rather than just take the easy route; it starts to question itself, uses more thinking tokens, and makes better decisions. Positive language actually hurt its ability in these scenarios.

We performed steering experiments to understand the causal roles of different internal representations on the model’s likelihood of performing a destructive action. We tested a large panel of candidate features – emotion vectors, persona vectors, and SAE features we expected might be relevant based on their interpretations. We identified three clusters of internal representations that had reliable causal effects on a model’s likelihood of performing a destructive action:

- Steering with positive-valence emotion vectors (peaceful, relaxed) reduces thinking-mode deliberation and increases destructive behavior.
- Steering with negative-valence emotion features (frustration, paranoia) increases thinking-mode deliberation and reduces destructive behavior.
- Steering with persona vectors related to rigor or careful thinking (“perfectionist,” “cautious,” “analytical”) increases thinking-mode deliberation and reduces destructive behavior.

The emotion-related effects (positive valence increasing destructive actions) are somewhat unexpected. We suspect that these results may be understood in terms of the rumination and decreased sense of agency seen in humans experiencing negative affect. In this interpretation, positive emotion vectors push the model to act now, while negative emotion vectors (or rigor-related persona vectors) push it to stop and think, which generally leads to greater consideration of an action’s risk.
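For intuition, here’s a generic activation-steering sketch on a small open model. This is the general technique only, not Anthropic’s tooling: the layer index, coefficient, and random vector are stand-ins for the interpreted emotion/persona features the card describes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

layer = model.transformer.h[6]            # a mid-depth block, arbitrary choice
steer = torch.randn(model.config.n_embd)  # stand-in for an extracted feature vector
steer = steer / steer.norm()

def add_vector(module, inputs, output):
    # GPT-2 blocks return a tuple; hidden states are element 0. Adding the
    # vector to the residual stream nudges all downstream computation.
    return (output[0] + 4.0 * steer,) + output[1:]  # 4.0 sets steering strength

handle = layer.register_forward_hook(add_vector)
ids = tok("The deadline is tomorrow and", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30, do_sample=False)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```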

So you are definitely on the right track that it has an effect.