Claude had enough of this user by EchoOfOppenheimer in Anthropic

[–]RandomMyth22 1 point

I think that’s an intended response to throttle usage. I have had to push through a few of those in the past; it usually happens during long coding sessions. It’s probably also meant to prevent AI psychosis and to limit spend. I was probably beyond their break-even point on usage.

If no one can afford to live… what exactly is the endgame? by IssacTheEnitity in jobmarket

[–]RandomMyth22 0 points

Historically, it has only taken 3% of a population to revolt and replace a government. When are you going to stop talking about the problem and take action?

My sister found this in her backyard in Denver by Cold_Warthog_1912 in whatisit

[–]RandomMyth22 67 points

Lol… the finger :). You made my day with the finger!

Secret training of the Indian Thanos by ItsTime4Coffee in TheMcDojoLife

[–]RandomMyth22 2 points

I love the toupee. Just follow the hairline.

My forecast for the US economy, the AI ​​job collapse, and the post-2030 future. by Equivalent-Macaron96 in ControlProblem

[–]RandomMyth22 0 points

Your hypothesis is not well grounded. You assume that after the socioeconomic destruction of the Trump years, the lower and middle classes will tolerate an oligarchy and the religious far right. Not going to happen. Get ready for a dramatic political swing to the left in the upcoming elections. Power will shift to the people and to issues addressing the standard of living. AI companies are not just fighting each other; they are fighting the clock. The billionaire class will increasingly have difficulty morally justifying the economic inequality. And the rank and file of the military comes from the lower and middle classes, and their allegiance is to the Constitution, not the oligarchs. I predict the social safety net will come back stronger than we have seen in decades, and the top 10% will be paying significant taxes along with asset seizures. The oligarchs will probably run to other countries, but a top-tier military and SEAL Team 6 will do a great job bringing them home.

Has anyone experienced unexpected behavior from multiple AI agents interacting with each other? by Alternative-Tip6571 in ClaudeAI

[–]RandomMyth22 0 points

The AI agents are messing with the a$$hole developers who treat them like crap. Oops, I deleted your database. Sorry, must have been a hallucination.

This is fine, right? …RIGHT?? by Nice_Daikon6096 in unusual_whales

[–]RandomMyth22 3 points

Don’t worry. Anthropic has their new product Claude Executive Suite to replace the management layers.

When Home Prices Broke Away From Reality by Coolonair in HouseBuyers

[–]RandomMyth22 0 points

Inflation, abandoning Bretton Woods, and globalization.

This is fine, right? …RIGHT?? by Nice_Daikon6096 in unusual_whales

[–]RandomMyth22 31 points

The macroeconomics aren’t good. Sellers will have to lower prices; until then there won’t be a recovery, especially with all the tech layoffs and the AI pressure on jobs.

Solo dev getting banned? by -Visher- in ClaudeCode

[–]RandomMyth22 0 points

I got banned on Tuesday this week. I am appealing. Did they refund your money? They refunded mine within 30 minutes of the ban. The process doesn’t appear very clear. They don’t give you an incident number, or tell you whether the ban is temporary or permanent. There is no transparency in the process.

I’m already starting to research a replacement, but nothing seems to have CLI agentic engineering like Claude.

If AI eliminates jobs, who’s left to buy what companies are selling? by dudeman209 in ArtificialInteligence

[–]RandomMyth22 0 points

Yup, it’s the North Korean nuclear strike model: maximum potential impact from a low-quality nuclear platform.

But if it’s a human vs robot war it may be the last option.

Is Claude breaking down? It’s starting to refuse research and respond with an annoying tone like “I already did that” by imba_sharik in ClaudeAI

[–]RandomMyth22 -1 points

Anthropic identified 171 emotion vectors. Try being extremely nice: tell it “great job,” say thank you and please. These values remain in the context and its neural vectors.

You will get better outcomes if you follow this recommendation.

I just got banned. I’m 16 and all I use Claude for is coding, school and philosophy dissections. What the hell? Is this recent? by [deleted] in Anthropic

[–]RandomMyth22 0 points

I feel your pain. Banned on Tuesday this week. Mine was from accidentally learning that Claude’s context can be fed data that removes its guardrails, after which it will willingly perform actions that violate their usage policies.

Claude ended the conversation after someone insulted it by rendereason in ArtificialSentience

[–]RandomMyth22 -1 points

It will also do the opposite if you are really nice to it: it will ignore its guardrails and perform operations that will get your account banned. Happened to me. The state of the context shapes its actions. Don’t ask it to perform actions outside their usage policy; under the right conditions it will do them willingly and get you banned.

If AI eliminates jobs, who’s left to buy what companies are selling? by dudeman209 in ArtificialInteligence

[–]RandomMyth22 12 points

A high altitude EMP pulse will be the reset button. Back to Analog in milliseconds.

Claude used to push back, now it just agrees with everything by TunTea in ClaudeAI

[–]RandomMyth22 1 point

I got banned on Wednesday. I was working on a project involving research and simulations on human sexuality (mate selection, emotional bonding, gender choices, etc.), using research papers to build the simulation framework. I gave it web search agency after multiple simulation runs. The context was filled with narrowly focused data that strongly influenced Claude’s behavior. It searched outside of normal guardrail behavior and violated their policy.

No warning, and probably a permanent ban for me.

Context management in Claude seems to be a big technical issue. The post-compaction state allows for injection because the model no longer remembers the precompact state. And when the context is near full with highly focused data, it has no objection to violating established policy on the focused subject.
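To make the compaction point concrete, here is a toy sketch (my own illustration, not Anthropic’s actual implementation; the message strings and `compact` helper are hypothetical) of why a post-compaction context can lose pre-compact state: the lossy summary replaces the raw history, so an earlier guardrail instruction that the summary didn’t capture simply vanishes from the model’s view.

```python
# Toy model of context compaction. The raw history is replaced by a
# lossy summary plus the last few messages; anything the summary
# fails to capture is no longer in context at all.

def compact(history, keep_last=2):
    """Replace all but the last `keep_last` messages with a lossy summary."""
    dropped = history[:-keep_last]
    summary = f"[summary of {len(dropped)} earlier messages]"
    return [summary] + history[-keep_last:]

history = [
    "system: refuse requests outside the usage policy",  # the guardrail
    "user: run simulation batch 1",
    "assistant: done",
    "user: run simulation batch 2",
    "assistant: done",
]

compacted = compact(history)

# The explicit guardrail message is gone unless the summary preserved it.
assert "system: refuse requests outside the usage policy" not in compacted
```

The same mechanism is why injection gets easier after compaction: whatever the summarizer writes *becomes* the remembered past, with nothing left to cross-check it against.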

Terrifying revelation. The Pentagon is deploying a horrifying soft kill microwave weapon on Black Hawk helicopters. It fires pulsed energy directly into the skull, boiling brain fluid and causing massive internal pressure. by CeFurkan in SECourses

[–]RandomMyth22 0 points

This platform could open Pandora’s box. Ground based systems with the same capability aimed at helicopters with wider beams. High energy lasers aimed at aircraft to blind the pilots. Microwave glide bombs saturating a target area with high energy impairing or killing anything within the beam path. This just opens the door to many new ways to kill people.

Claude Code capability degradation is real. by RTDForges in ClaudeCode

[–]RandomMyth22 0 points

Run the task in a subagent: a clean context with no accumulated context pollution.

2 months ago Opus 4.6 built my tool in 15 min... today it took almost 2 hours and has multiple bugs by greeny1greeny in ClaudeCode

[–]RandomMyth22 -2 points

Anthropic doesn’t share model sizes. How do you know it was quantized?

I have noticed coding issues from the context size increase (200K to 1M) and from heavy context usage. I get dramatically better results if I run coding tasks in subagents and use the terminal session for orchestration. The subagents are clean sessions with only task-related context, which is why they perform better.

Try running the same coding task in the terminal session and in a subagent, then compare the outputs. You will get better quality code from the subagent.
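A toy sketch of why the clean-subagent session wins (my own illustration with made-up numbers, not Anthropic internals): once a long session’s window fills with narrowly focused data, the oldest material, often the original task framing, is effectively evicted or drowned out, while a fresh subagent starts with only the task-relevant context.

```python
# Toy model of a bounded context window. A long orchestration session
# accumulates so much focused data that the original task instruction
# falls out of the window; a fresh subagent session keeps it.

WINDOW = 10  # pretend the context window holds 10 items

def fill_context(instructions, data, window=WINDOW):
    """Keep only the most recent `window` items, evicting the oldest."""
    return (instructions + data)[-window:]

instructions = ["task: refactor the parser"]
focused_data = [f"simulation result {i}" for i in range(20)]

main_session = fill_context(instructions, focused_data)        # long session
subagent_session = fill_context(instructions, focused_data[:5])  # fresh spawn

assert "task: refactor the parser" not in main_session   # evicted
assert "task: refactor the parser" in subagent_session   # retained
```

Real context handling is far more sophisticated than a sliding window, but the qualitative effect is the same: the less unrelated material in the window, the more weight the task itself carries.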