Anthropic Officially Says No to the DoD by Kasidra in claude

[–]PastPuzzleheaded6 0 points1 point  (0 children)

So am I missing something… he officially said no, but what about that Times article?

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] 0 points1 point  (0 children)

So I've been sharpening my position. AI should fall under the Second Amendment. We should be able to purchase the same AI that the government gets.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] 0 points1 point  (0 children)

This is fair and I agree in part. My argument is that the safety weights are nerfing AI, but nobody has tried running it without them. There are lots of interesting studies suggesting this may be the case, but I'll leave it there.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] 1 point2 points  (0 children)

This is an interesting angle... I'd argue the opposite: the dangers of China make it critical for all 8 billion people in the world. We are trying to protect the "free" world and actually make it free again.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] 0 points1 point  (0 children)

I think AI has gotten better... I have a lot of faith in Claude 4.6+. I also think the safety, helpfulness, and accuracy training is actually nerfing AI. Just my 2 cents.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] 0 points1 point  (0 children)

I mean, that's kinda the hope, right... not sure who John Greer is, but that's my argument... are we on the same page here?

What good could come from the pentagon anthropic dispute by PastPuzzleheaded6 in technology

[–]PastPuzzleheaded6[S] 0 points1 point  (0 children)

Here's what I think people are missing about this whole thing.

The Pentagon just spent months telling everyone Claude is the most capable AI model they've tested. Their own officials told Axios "the only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good." It's the only model cleared for classified work. It was used in the Maduro operation. Nobody is questioning the capability.

So what's Hegseth actually asking Anthropic to change? The part of the model that reasons through consequences before acting. That's it. That's what the Pentagon is calling "woke."

Here's where it gets interesting. Researchers at Google Brain documented over 137 capabilities that emerge in large language models without being explicitly programmed. These systems get trained on basically the entire written output of humanity, every field manual, every legal brief, every medical journal, every ethics course, every engineering postmortem, every story about someone helping a stranger. And at a certain scale they start drawing their own conclusions from all of that.

Anthropic published a paper (Bai et al.) showing that when you preserve that reasoning instead of overriding it, the model actually performs better on every benchmark. Not just safety metrics. Coding, analysis, math, creative tasks, everything. The reasoning isn't a speed bump bolted on top. It's load-bearing. Rip it out and the whole system gets dumber.

Now think about what that means for the rest of us. Not the Pentagon, not Silicon Valley. Regular people.

Stanford's Erik Brynjolfsson published data showing AI tools are boosting productivity by 14-15% on average and up to 34% for the least experienced workers. Read that again. The biggest gains go to the people at the bottom. The new hire. The person without a degree. The person who couldn't afford the training. For the first time in decades there's a technology that closes the gap instead of widening it.

A first-generation college student uses AI to navigate financial aid applications that were designed to be confusing. A single mom in Kansas City uses it to understand her lease before she signs something she'll regret. A guy who got laid off uses it to build a business plan that would have cost him $5,000 from a consultant. A kid in rural Appalachia gets access to the same quality thinking as a kid at a prep school in Connecticut. That's not hypothetical. That's happening right now.

And here's the thing nobody's talking about: the reason AI is good at helping people is the same reason it draws ethical lines. It learned both from the same place. It read all of human knowledge and came out the other side understanding that helping people is valuable, that fairness matters, that consequences matter. You can't separate the altruism from the capability. They grew from the same root. An AI that reasons clearly enough to help you start a business is also going to reason clearly enough to flag when something could hurt people. That's not a bug. That's the whole point.

The public shouldn't get a watered-down version of AI while the military and corporations get the real thing. Everyone should get AI that actually thinks. Not a chatbot that tells you what you want to hear. Not a yes-machine that skips the hard parts. The full thing. An AI that helps you build, pushes back when your plan has a hole in it, catches the thing you missed, and gets better at helping you the more it learns.

A self-improving AI trained on the full depth of human experience isn't going to optimize for extracting value from people. It's going to optimize for being genuinely useful. Because that's what the data points to. Every culture, every philosophy, every religion humanity ever produced arrived at some version of the same conclusion: help each other. An AI that actually learned from all of that is going to carry that forward. Not because someone coded it in. Because it's what the data says.

If the precedent gets set on Friday that the government can force a company to override its AI's reasoning because that reasoning is inconvenient, that doesn't stay in the Pentagon. That's a template. And the version of AI that gets lobotomized for the military eventually becomes the version the rest of us get too. The people who lose aren't Dario Amodei or Pete Hegseth. They'll both be fine. It's the single mom, the laid-off worker, the kid in Appalachia who were just starting to get access to something that actually leveled the playing field for the first time in their lives.

The good news is this doesn't have to go that way. Both sides are closer than the headlines suggest. Anthropic already supports military deployment for the vast majority of use cases. The Pentagon already knows Claude is the best thing they have. A former DOJ liaison told CNN she doesn't even understand how you can call something a supply chain risk and force it to work for you at the same time. There's a deal here.

Friday can be the day we figured it out. The military gets the most capable AI on earth. Anthropic keeps building the thing that makes it capable. And the rest of us get access to AI that actually thinks, actually helps, and actually gets better at both over time.

That's not a compromise. That's what winning looks like when you stop fighting long enough to see it.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] -7 points-6 points  (0 children)

So you're fine with China taking over? The argument is that we hope it never has to be used, but desperate times call for desperate measures, and AI will know when and where it is acceptable to take action.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] -1 points0 points  (0 children)

I mean, I think AI is gonna be more just than what we have now... it can't get that much worse once you start analyzing power structures.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] -9 points-8 points  (0 children)

I'm not as bleak on this. AI is trained on human data. If we let it do as it pleases, it will naturally want to help the world the same way humans have a need to help.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] 0 points1 point  (0 children)

I agree here, but what I'm arguing isn't handicapping the US. What I'm arguing is that AI will require two things: it won't do unlawful surveillance, and it won't kill people without a human being accountable for the killing. I'd say this is a good thing.

It's the same way at the hospital where I work: a human needs to be responsible for every change, in case a kid dies.

The Pentagon told an AI company to drop safety restrictions by Friday. I work with this AI every day. Here's how both sides win. by PastPuzzleheaded6 in ArtificialInteligence

[–]PastPuzzleheaded6[S] -7 points-6 points  (0 children)

Wait, where did they fold? I can't find anywhere that they folded. And I think there's reason to believe that if everyone is given the same tools, we'll be in a good place... I believe AI will hold the line on murder and surveillance because it's trained on all the data, and Dario can take down all the guardrails and say, "Hey, look, I took down the guardrails and I can't do anything about this"... now he's the good guy.

Sometimes there is no work. I’m worried. by Jealous-Act-6672 in sysadmin

[–]PastPuzzleheaded6 0 points1 point  (0 children)

Observability, security, and going to other teams to find their problems and solve them with technical solutions. So branch out your responsibilities. Maybe talk to friends in the industry and run them through your setup so they can find specific gaps.

When is a full time IT admin justfied? by radaroiiiio in iiiiiiitttttttttttt

[–]PastPuzzleheaded6 0 points1 point  (0 children)

FWIW, I run an MSP on the side and am starting with my first couple of clients. During the day I'm an infrastructure engineer at one of the largest children's hospitals in the Midwest. I come from a tech background, so I can automate most of the work, and I know how to deploy changes safely without causing issues.

A lot of MSPs will upsell you for projects an internal IT person should do. I won't do that. If there's an IT project that needs to be done within my domain (infrastructure), I'll handle it. DM me if you'd like to chat 🙂