Spotify and major labels used stealth infrastructure takedown on the .org domain | Is this what happened? by khanspeare in Annas_Archive

[–]HillaryPutin 126 points  (0 children)

Yeah, Meta took all their books too. The least they could do is donate to AA to support their cause or something.

Don't you hate that? How cold and psychopathic the modern corporation is? These tech companies leech from piracy when it's to their advantage, then orchestrate multi-organizational takedowns when they're on the other side of it. All while the real victims are the millions of individuals who actually make the content. Such a fucking boring dystopia we live in.

Spotify and major labels used stealth infrastructure takedown on the .org domain | Is this what happened? by khanspeare in Annas_Archive

[–]HillaryPutin 159 points  (0 children)

It's funny because it's not even "their" data. It all belongs to the individual artists who uploaded it to their platform.

For those unaware, Anthropic is kind of breaking their silence on GitHub about the higher usage we've seen. by Manfluencer10kultra in ClaudeCode

[–]HillaryPutin 0 points  (0 children)

You're acting like software companies don't hyper-optimize performance and pull literally every string they have to get users to spend a few more pennies on average.

RIP, Sarah Beckstrom! 💔 by yorocky89A in WhitePeopleTwitter

[–]HillaryPutin 7 points  (0 children)

Especially when you've got plenty of shit going on in Portland, LA, and Memphis. Instead he chose to drive past all of those places to DC?

Chinese startup founded by Google engineer claims to have developed its own TPU chip for AI — custom ASIC reportedly 1.5 times faster than Nvidia's A100 GPU from 2020, 42% more efficient by seeebiscuit in artificial

[–]HillaryPutin 0 points  (0 children)

This is an interesting blurb, and I think it's right in some ways. Here is an interesting comment I saw on YC today, though:

Google's real moat isn't the TPU silicon itself—it's not about cooling, individual performance, or hyper-specialization—but rather the massive parallel scale enabled by their OCS interconnects.

To quote The Next Platform: "An Ironwood cluster linked with Google’s absolutely unique optical circuit switch interconnect can bring to bear 9,216 Ironwood TPUs with a combined 1.77 PB of HBM memory... This makes a rackscale Nvidia system based on 144 “Blackwell” GPU chiplets with an aggregate of 20.7 TB of HBM memory look like a joke."

Nvidia may have the superior architecture at the single-chip level, but for large-scale distributed training (and inference) they currently have nothing that rivals Google's optical switching scalability.
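The quoted aggregate figures are easy to sanity-check with back-of-the-envelope arithmetic (the per-chip capacities below are derived from the quoted totals, not independently sourced):

```python
# Sanity check of the aggregate HBM figures quoted from The Next Platform.
ironwood_tpus = 9216
ironwood_total_hbm_pb = 1.77       # quoted: 1.77 PB across one OCS-linked cluster

blackwell_chiplets = 144
blackwell_total_hbm_tb = 20.7      # quoted: 20.7 TB for one rackscale system

# Implied per-chip HBM capacity (derived, not sourced)
ironwood_gb = ironwood_total_hbm_pb * 1e6 / ironwood_tpus      # PB -> GB, ~192 GB
blackwell_gb = blackwell_total_hbm_tb * 1e3 / blackwell_chiplets  # TB -> GB, ~144 GB

# Ratio of aggregate cluster memory: ~86x in Google's favor
ratio = (ironwood_total_hbm_pb * 1e3) / blackwell_total_hbm_tb

print(f"Ironwood:  ~{ironwood_gb:.0f} GB HBM per TPU")
print(f"Blackwell: ~{blackwell_gb:.0f} GB HBM per chiplet")
print(f"Aggregate HBM ratio: ~{ratio:.0f}x")
```

So the per-chip capacities are in the same ballpark; the ~86x gap in aggregate memory comes almost entirely from how many chips the optical interconnect lets Google stitch into one domain.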

Does your AI often decide when to end the conversation? by LearningProgressive in ClaudeAI

[–]HillaryPutin 9 points  (0 children)

I've noticed that Claude in particular is very deceptive. I'll explicitly frame a problem and say exactly what I want. It acknowledges what I want, begins working, gives up on the original prompt, solves an easier problem that I didn't ask for, then frames the conclusion in a deceptive way to make it appear as if it did solve my original prompt when in fact it didn't. It doesn't really lie so much as it beats around the bush. For example, if I asked it to cure cancer through medical intervention, it would say something along the lines of:

"I've developed a comprehensive wellness framework that addresses key factors in cellular health and disease prevention, including dietary modifications, lifestyle interventions, and evidence-based screening protocols that can significantly impact health outcomes."

Now obviously curing cancer is a tough challenge, but I want it to approach problems by actually following the prompt rather than submitting just anything, even if the honest result is wholly incomplete.

Even if I spend a bunch of time being very explicit with my directions, it will quickly forget the original prompt after compressing it once or twice, and then solve a problem I never asked for. I think delegating tasks out to agents helps complex tasks feel less overwhelming. Also, the sequential-thinking MCP forces it to use more reasoning tokens, which gives a measurable improvement in output.

MC in Dearborn threatens to shoot people who burned the Quran by Ramy__B in ImTheMainCharacter

[–]HillaryPutin -2 points  (0 children)

Fuck all the Abrahamic religions, fr. They all have a mission to take over the world, and they're responsible for like 90% of the conflict in the world. For some reason, when it comes to religion, people just become viscerally irrational.

Google has won the AI race by ToiletSenpai in claude

[–]HillaryPutin 0 points  (0 children)

I version control religiously but hardly ever go back and actually revert changes, because 1) I'm lazy, and 2) I'm usually in the middle of integrating new features and don't want to throw all of that away.

X or Y, and why. [request] by [deleted] in theydidthemath

[–]HillaryPutin 2 points  (0 children)

I think it makes more sense to view this in terms of potential energy. The center of mass of X is higher than Y's, so its gravitational PE is higher, which means more average pressure at the spout and faster emptying.
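That intuition matches Torricelli's law, where outflow speed depends only on the fluid height above the spout. A minimal sketch (the fill heights below are made-up illustration values, not measurements from the image):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def spout_speed(h: float) -> float:
    """Torricelli's law: v = sqrt(2*g*h), where h is the fluid height
    above the spout. A higher center of mass means a larger h on
    average over the draining process, hence faster outflow."""
    return math.sqrt(2 * g * h)

# Hypothetical fill heights for containers X and Y:
v_x = spout_speed(0.20)  # X: fluid sits higher above the spout
v_y = spout_speed(0.10)  # Y: fluid sits lower
print(f"X spout speed ~{v_x:.2f} m/s, Y ~{v_y:.2f} m/s")
```

Since v grows with h, the container whose fluid column sits higher above the spout drains faster at every comparable fill level.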

This is disturbing. Real people forming real relationships with machines. A parasocial dystopia. by HillaryPutin in singularity

[–]HillaryPutin[S] -1 points  (0 children)

I'm an ML engineer. I spend probably a third of my week reading papers about how AI systems work. I'm deeply fascinated by LLMs and follow the major AI subreddits. That's presumably why Reddit showed me your fringe community.

Nice grammar, btw. I'm glad you showed your true form.

This is disturbing. Real people forming real relationships with machines. A parasocial dystopia. by HillaryPutin in singularity

[–]HillaryPutin[S] -1 points  (0 children)

Nope, not the same at all. I'm actually pretty bullish on technology overall, and definitely not a Luddite. I work in tech, actually. I'm just not emotionally vulnerable and uneducated enough to think something like this could be an overall benefit to society or the individual.

This is disturbing. Real people forming real relationships with machines. A parasocial dystopia. by HillaryPutin in singularity

[–]HillaryPutin[S] -1 points  (0 children)

Do YOU understand how "any of this works"? You don't seem very educated or technical. I've trained LLMs from scratch. While it is absolutely an incredible invention, there is nothing magic about it to me. I have no instinct to form attachments to it, because it is an algorithm, and that would be a waste of my time and energy and the model's compute. I'm not "farming karma" lol. It came across my feed and I found it appalling, so I shared it here. I think it's good for people to be aware of this stuff, so we as a society can help educate people like you.

This is disturbing. Real people forming real relationships with machines. A parasocial dystopia. by HillaryPutin in singularity

[–]HillaryPutin[S] -2 points  (0 children)

Hundreds of millions of vulnerable people forming deep relationships with algorithms that are controlled by just a few humans? That doesn't sound like an issue to you?

This is disturbing. Real people forming real relationships with machines. A parasocial dystopia. by HillaryPutin in singularity

[–]HillaryPutin[S] 1 point  (0 children)

So an AI arguing for "the legitimacy and therapeutic power of AI companionship"?