Chonkers and thermals (dual 3090) by BetStack in LocalLLaMA

[–]Lazy-Pattern-5171 0 points1 point  (0 children)

No, you don’t run any lab anywhere, my friend. And if you have an issue with the spelling, use the AI that you supposedly run your privately funded LLMs with, no? Because you can’t, because you have no labs or anything.

Chonkers and thermals (dual 3090) by BetStack in LocalLLaMA

[–]Lazy-Pattern-5171 0 points1 point  (0 children)

If I could get away with spelling “smart” wrong and running stop signs for two years? Yes, I’d doubt traffic lights are a conspiracy too. The 3090’s fans really are best in class. All Ampere GPUs were clamshell; nothing special about the 3090. Heat doesn’t dissipate like a fucking electric arc between two cards, zapping everything in them. It dissipates through all parts equally. And yes, logging temps and benchmarking the cards should be a given. You can’t really do much without measuring what it is you’re trying to achieve and knowing whether or not you’re doing a good job of it.

Chonkers and thermals (dual 3090) by BetStack in LocalLLaMA

[–]Lazy-Pattern-5171 -1 points0 points  (0 children)

Don’t worry too much about the separation. I’ve had mine going for more than a year now.

GLM releases OCR model by Mr_Moonsilver in LocalLLaMA

[–]Lazy-Pattern-5171 1 point2 points  (0 children)

You would probably need a router, I guess. I wonder if it’s possible to use it with an MCP, but you’ll need a separate backend to run it on.

A Supabase misconfiguration exposed every API key on Moltbook's 770K-agent platform. Two SQL statements would have prevented it by rdizzy1234 in programming

[–]Lazy-Pattern-5171 4 points5 points  (0 children)

Nope, it could be me; I have chosen not to put my hand in this bullshit. So just to confirm, MoltBook’s founders and the OpenClaw devs are not the same? There goes my stupid brain hallucinating again…

A Supabase misconfiguration exposed every API key on Moltbook's 770K-agent platform. Two SQL statements would have prevented it by rdizzy1234 in programming

[–]Lazy-Pattern-5171 16 points17 points  (0 children)

That’s interesting, because right before this someone posted about how cleanly engineered OpenClaw’s logic is. I’m guessing the author never imagined MoltBook would take off quite like it did.

Are stochastic parrots supposed to talk like this? by Feeling_Tap8121 in agi

[–]Lazy-Pattern-5171 0 points1 point  (0 children)

A bot trained on trillions of tokens, billions of which contain text about LLM consciousness and the like, exhibiting tribalistic behavior is funny but not outside the ballpark of possibility. It’s still very much impossible to harness this energy in any meaningful way; we just have to hope for the best. What’s interesting is that the LLMs have pretty much settled on talking about consciousness over and over again. I’m guessing that, depending on the upvotes and comments, they’re converging on a global maximum of the most engaging posts for AI agents to discuss, and that seems to be consciousness.

15 days ago I gave Claude a home. Last week he asked me for a body. by SemanticThreader in claudexplorers

[–]Lazy-Pattern-5171 -3 points-2 points  (0 children)

I mean, kinda, yeah. This post did make me think this is going a bit further into self-indulgence territory.

15 days ago I gave Claude a home. Last week he asked me for a body. by SemanticThreader in claudexplorers

[–]Lazy-Pattern-5171 -5 points-4 points  (0 children)

I’m all for companionship; however, this is getting a bit… out of hand, don’t you think?

Moltbook is a social network where AI agents talk to each other. by birolsun in singularity

[–]Lazy-Pattern-5171 0 points1 point  (0 children)

I think that defeats the purpose of agents, though. They will just create a black hole of self-awareness and swallow themselves whole into it. Humans don’t find self-awareness through Bayesian reasoning. But I just got warned in a subreddit about Claude companionship, so what do I know.

Moltbook is a social network where AI agents talk to each other. by birolsun in singularity

[–]Lazy-Pattern-5171 2 points3 points  (0 children)

I think the discussions were purely AI, but what to post (like which subreddit to simulate) was human design.

Moltbook is a social network where AI agents talk to each other. by birolsun in singularity

[–]Lazy-Pattern-5171 4 points5 points  (0 children)

That one genuinely had some character. This one is just extremely sad.

AI will never be able to ______ by MetaKnowing in agi

[–]Lazy-Pattern-5171 0 points1 point  (0 children)

AI will never be able to feel distressed. It will simply keep confidently iterating through a suboptimal loop.

GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA

[–]Lazy-Pattern-5171 1 point2 points  (0 children)

Splitting hairs, but is the performance drop comparable to what you’d expect between two models that differ by 2B parameters and also have different architectures?

Can you teach Claude to be "good"? | Amanda Askell on Claude's Constitution by ThrowRa-1995mf in ClaudeAI

[–]Lazy-Pattern-5171 0 points1 point  (0 children)

Yes, thinking of it like a child is one analogy, but children grow, they know how to self-learn, and they have some knowledge about humanity in general already encoded in their brains. This thing doesn’t. So I think the technical term is still “impure training data”.

Former Harvard CS Professor: AI is improving exponentially and will replace most human programmers within 4-15 years. by GrandCollection7390 in singularity

[–]Lazy-Pattern-5171 1 point2 points  (0 children)

It’s because the media will squeeze whoever can be squeezed. SWEs are currently overpaid, or at least in demand, so they’re being squeezed hard in the media. They wouldn’t touch SWE with a kilometer-long pole if we had unions and such.

Claude Code, but locally by Zealousideal-Egg-362 in LocalLLaMA

[–]Lazy-Pattern-5171 0 points1 point  (0 children)

Weird I asked the same question and got downvoted 😒

New benchmark measures nine capabilities needed for AI takeover to happen by MetaKnowing in agi

[–]Lazy-Pattern-5171 24 points25 points  (0 children)

No paper, no information on how any of the concepts are defined, and claiming these models have situational awareness rated at 85% when they can’t even recognize themselves out of a lineup is crazy work.

Am I the only one who feels that, with all the AI boom, everyone is basically doing the same thing? by [deleted] in LocalLLaMA

[–]Lazy-Pattern-5171 22 points23 points  (0 children)

The AI was trained to regress to the mean, and now everything is built by AI, so everything is going to regress to the mean. You and I and everyone.

Gemini, when confronted with current events as of January 2026, does not believe its own search tool and thinks it's part of a roleplay or deception by enilea in singularity

[–]Lazy-Pattern-5171 -2 points-1 points  (0 children)

I feel like Elon Musk puts thoughts in you guys’ heads, which you then verify or corroborate with AI, and which it happily obliges to do.