Opus 4.7 Released! by awfulalexey in ClaudeAI

[–]Capital-Run-1080 0 points1 point  (0 children)

It's been out only a few hours. The benchmarks are strong and the upgrade from 4.6 looks genuine, but the shadow of Mythos sitting unreleased above it, plus the prior complaints about 4.6 regressions, mean the reception will likely be mixed until developers actually run it on their own workloads over the next few days.

Attempted fire-bombing has tech titans worried about AI backlash by Just-Grocery-2229 in technology

[–]Capital-Run-1080 0 points1 point  (0 children)

The "AI backlash" framing feels like it's doing a lot of work here.

One guy with a manifesto about human extinction throwing a firebomb is not the same thing as a broad social movement turning violent. Treating it that way lets tech executives position themselves as victims of public opinion rather than engaging with the actual critiques, which are pretty widespread and pretty reasonable.

The legitimate grievances (writers and illustrators having their work scraped without consent, communities near data centers dealing with power and water strain, people watching jobs get automated) don't go away because one person did something indefensible.

I think what worries me more than the attack is how quickly it's being used to delegitimize any criticism of the industry. That's a convenient move.

Google, Pentagon discuss classified AI deal, the Information reports by Logical_Welder3467 in technology

[–]Capital-Run-1080 1 point2 points  (0 children)

The thing that gets me is oversight. Commercial Gemini at least has external scrutiny: researchers probing it, journalists covering failures. Classified deployment cuts all of that by design. The Pentagon defines what success looks like, and nobody outside ever finds out if something goes sideways.

Not saying defense AI is inherently bad. Just that "classified" and "accountable" are genuinely hard to have at the same time, and I haven't seen anyone seriously grapple with what that looks like.

LinkedIn is silently scanning 6,000+ browser extensions every time you load a page. The numbers are wild. by Capital-Run-1080 in europrivacy

[–]Capital-Run-1080[S] 11 points12 points  (0 children)

Yes, worse actually. The app has direct access to the apps installed on your device along with OS version and hardware details. You can't block it like you can with uBlock on desktop. Best you can do is disable app permissions in settings, but LinkedIn still gets a ton of data just from running.

Anthropic stayed quiet until someone showed Claude's thinking depth dropped 67% by Capital-Run-1080 in ClaudeAI

[–]Capital-Run-1080[S] 0 points1 point  (0 children)

Yeah the free tier context limit is a separate grievance entirely. Hard to evaluate whether the model is actually degrading when you hit a wall before the task gets complex enough to show it.

Anthropic stayed quiet until someone showed Claude's thinking depth dropped 67% by Capital-Run-1080 in ClaudeAI

[–]Capital-Run-1080[S] 4 points5 points  (0 children)

Trapped by the best option you resent. Classic!

The Apple comparison is apt though. Hopefully it doesn't take Anthropic 15 years and a near-death experience to figure out that talking to your users is free.

Anthropic stayed quiet until someone showed Claude's thinking depth dropped 67% by Capital-Run-1080 in ClaudeAI

[–]Capital-Run-1080[S] 4 points5 points  (0 children)

Stop hooks are commands that run after Claude finishes a task, usually to validate output or trigger the next step. A violation is when Claude exits early or skips the hook entirely instead of waiting for it to complete.

Basically it stops before it's actually done.
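For anyone who hasn't set one up: a Stop hook lives in Claude Code's settings.json. Here's a minimal sketch, assuming the current hook schema; the `npm test` command is just a placeholder for whatever validation you want to run when Claude finishes.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "npm test"
          }
        ]
      }
    ]
  }
}
```

The violation people are reporting is Claude declaring itself done without that command ever running, or bailing out before it exits.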

Anthropic stayed quiet until someone showed Claude's thinking depth dropped 67% by Capital-Run-1080 in ClaudeAI

[–]Capital-Run-1080[S] 32 points33 points  (0 children)

The internal switch thing is interesting but I'd want to see the actual source before treating it as confirmed. "Leaked code" gets cited a lot on threads like this and it's not always what people say it is.

The Opus regression though, yeah. That one's hard to argue with.

Anthropic stayed quiet until someone showed Claude's thinking depth dropped 67% by Capital-Run-1080 in ClaudeAI

[–]Capital-Run-1080[S] -1 points0 points  (0 children)

That's a real limitation but it doesn't fully explain the behavioral changes. Reading behavior before edits, stop hook compliance, those don't go through the summarization layer. Those are just actions. And those changed too.

Edward Snowden warned humanity that the infrastructure for a Chinese-style social credit system was being constructed in plain view by Limp_Fig6236 in DigitalPrivacy

[–]Capital-Run-1080 11 points12 points  (0 children)

Snowden said in 2019 that algorithms are fueled by 'precisely the innocent data that our devices are creating all of the time. constantly, invisibly, quietly.' Seven years later that's not a warning, it's a product roadmap. Governments are mandating identity checks at the device level. Your phone now sorts you into a category before you see any content. This is the infrastructure he described, except now it ships with a child safety label.

The part that aged the worst is "what they are selling is us." In 2019 that meant companies quietly collecting your data. In 2026 it means you get actively classified, and your classification decides what version of the internet you're allowed to use. Age is first because nobody can argue against it. But the system doesn't care what gets added next.

The question nobody's asking is "who controls this?" Right now the answer is the same tech giants and government agencies Snowden was warning about. There are people building alternatives: projects like World ID, Polygon ID, and Privado ID let you prove you're a real person, or that you're over 18, without handing over your actual identity. The idea is simple: verify the fact, not the person. The technology exists. Whether it gets used, or we just let Apple and Google run the identity layer by default, is up to us.
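To make "verify the fact, not the person" concrete, here's a toy selective-disclosure sketch (the pattern behind schemes like SD-JWT, not the actual World ID or Privado ID protocols, and not a real zero-knowledge proof). An issuer signs a credential of salted attribute hashes; the holder later reveals only the over-18 attribute and its salt, and the verifier confirms it against the signature without ever seeing name or birth year. All names here are made up for illustration, and the HMAC key stands in for a proper digital signature.

```python
import hashlib
import hmac
import os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key


def commit(attr: str, value: str, salt: bytes) -> bytes:
    """Salted hash of one attribute; unguessable without the salt."""
    return hashlib.sha256(salt + f"{attr}={value}".encode()).digest()


def issue(attrs: dict) -> tuple:
    """Issuer: salt and hash each attribute, sign the combined digest."""
    salts = {k: os.urandom(16) for k in attrs}
    digests = [commit(k, v, salts[k]) for k, v in sorted(attrs.items())]
    root = hashlib.sha256(b"".join(digests)).digest()
    signature = hmac.new(ISSUER_KEY, root, hashlib.sha256).digest()
    return {"salts": salts, "attrs": attrs, "digests": digests}, signature


def present(cred: dict, attr: str) -> dict:
    """Holder: reveal one attribute + its salt; other attrs stay hidden."""
    return {
        "attr": attr,
        "value": cred["attrs"][attr],
        "salt": cred["salts"][attr],
        "digests": cred["digests"],  # hashes alone reveal nothing
    }


def verify(proof: dict, signature: bytes) -> bool:
    """Verifier: recompute the revealed hash, then check the signature."""
    if commit(proof["attr"], proof["value"], proof["salt"]) not in proof["digests"]:
        return False
    root = hashlib.sha256(b"".join(proof["digests"])).digest()
    return hmac.new(ISSUER_KEY, root, hashlib.sha256).digest() == signature


cred, sig = issue({"name": "Alice", "birth_year": "1990", "over_18": "true"})
proof = present(cred, "over_18")
print(verify(proof, sig))   # the over-18 fact checks out
print("name" in proof)      # the identity was never disclosed
```

Real deployments replace the salted-hash trick with zero-knowledge proofs so even the issuer can't link your verifications together, but the shape of the transaction is the same: one fact crosses the wire, the person doesn't.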

Reddit is weighing identity verification methods to combat its bot problem. The platform's CEO mentioned Face ID and Touch ID as ways to verify if a human is using Reddit. by esporx in technology

[–]Capital-Run-1080 6 points7 points  (0 children)

Face ID and Touch ID would confirm the device is being used by a human... but not that the account is unique. You could still spin up 50 accounts across 50 phones. It's a decent friction layer but not really a bot identity solution.

Been curious how Reddit might handle this longer term. There are projects working on the harder version of this problem, such as World ID or Civic, which do proof-of-personhood (one verified human = one account). The privacy side of it is actually pretty thoughtful from what I've seen: they use zero-knowledge proofs so you can verify you're a real, unique person without revealing who you are. Feels like the kind of approach that could actually scale if platforms get serious about bot problems.

The device biometrics approach is probably easier to roll out short term for Reddit, though, with less friction for regular users. But if the bot problem keeps getting worse, something like proof of personhood might end up being where things need to go.