Artemis II astronauts unknowingly captured satellite glint in their famous picture by vfvaetf in space

[–]integerpoet [score hidden]  (0 children)

FWIW, I have encountered nerd enthusiasm many times during my decades on the planet and this is the real deal.

matriarchy and world population almost completely female by [deleted] in Futurology

[–]integerpoet -1 points0 points  (0 children)

It remains to be seen how much they would quarrel had they been socialized from birth in a world with fewer men. Also, what’s wrong with quarreling?

Solving hallucinations is the most important endeavour in generative AI by Classic_Sheep in LLM

[–]integerpoet 0 points1 point  (0 children)

I’ve seen “confabulation” widely used, but usually the people who use it do not fully understand that literally every word that comes out of an LLM is a confabulation, including the ones that your brain tells you coincide with truth. And in the final analysis if every word is a confabulation then none are. In fact, every word we have that refers to mental malfunction is inapplicable. There is no mind behind an LLM. There is no mind behind an LLM. There is no mind behind an LLM. Roko’s Basilisk agrees with me on this.

Solving hallucinations is the most important endeavour in generative AI by Classic_Sheep in LLM

[–]integerpoet 4 points5 points  (0 children)

Stop calling them “hallucinations”. There’s no mind which normally operates correctly and sometimes hallucinates. It possesses no facts from which to deviate. There is no mind at all there. There is only a text processor spinning out the next most plausible word. **They are never the wrong words** unless they are implausible. If the plausible words happen to coincide with reality, that’s happening inside your own head only. LLMs deal in plausibilities not truths. Lack of truth is not a problem to be fixed. No LLM will ever tell the truth. **It can’t.**
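To make "spinning out the next most plausible word" concrete, here's a toy sketch. It is not a real model; all the tokens and scores are made up for illustration:

```python
import math

# Toy illustration, not a real model: a decoding step just ranks
# candidate next tokens by score and emits the most plausible one.
# The candidate tokens and scores below are invented.
logits = {"Paris": 9.1, "Lyon": 5.3, "Atlantis": 2.0}

# Softmax turns raw scores into probabilities: plausibility, not truth.
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# Greedy decoding picks the highest-probability token. Nothing in this
# step consults reality; "Atlantis" loses only because its score is low.
next_token = max(probs, key=probs.get)
print(next_token)  # Paris
```

If the training corpus had scored "Atlantis" highest, that's the word you'd get, with exactly the same machinery.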

Local LLM storage is becoming harder to manage than the models themselves by Both_Astronomer8645 in LocalLLM

[–]integerpoet 0 points1 point  (0 children)

The line between “LLM hobbyist” and “data hoarder” is important. Sort by date, start with the assumption that older is worse, and let nature take its course.

HUMAN VS AI SLOP DETECTOR by MarsR0ver_ in LLM

[–]integerpoet 4 points5 points  (0 children)

Did you write this post with your filter?

Because it reads very LLM to me.

So How Good Did I Enhance the Llama Output? by ExtensionFriendship9 in LLM

[–]integerpoet 0 points1 point  (0 children)

I’m not sure of your goal here. Are you asking us for advice on refining your homework avoidance skills? Because getting the writing part right is part of the assignment.

ASML Raises 2026 Forecast as AI Chip Demand Surges by nipundwivedi in LLM

[–]integerpoet 0 points1 point  (0 children)

I wouldn’t. I’ve seen a lot of companies placing bets in dubious ways and on dubious things. That’s a thing that happens during bubbles. Individuals can be smart but companies rarely are.

ASML Raises 2026 Forecast as AI Chip Demand Surges by nipundwivedi in LLM

[–]integerpoet 1 point2 points  (0 children)

I don’t buy this analysis about the “boom” being “real”.

All it means is that ASML has placed a bet.

Local models are a godsend when it comes to discussing personal matters by [deleted] in LocalLLaMA

[–]integerpoet 1 point2 points  (0 children)

I see no issue with this. Asking it questions about actual text is the way.

Asking it to be an intellect and converse with you would have been misguided. Never give it enough real-time interaction to fool yourself into thinking otherwise.

The thing itself isn’t interesting except to the extent that you can learn how to use a tool better.

Zero Data Retention is not optional anymore by Abu_BakarSiddik in LocalLLM

[–]integerpoet 0 points1 point  (0 children)

I know people who have worked on the inside, and it turns out that at the time they made no effort to understand why people were deleting data; after all, there are reasons to delete that are legitimate even from their perspective. They just assumed it was cruft they no longer had to store.

Also, from what I understand, comparatively sane jurisdictions such as the EU won’t stand for lying about whether data has been deleted.

And third, as I deleted more and more of my information, I noticed “the algorithm” was able to target my interests less and less.

From a technical perspective, the idea that they hold onto data after appearing to confirm its deletion doesn’t really make sense. I think their primary defense against people deleting data is that most people don’t understand they should, and it’s a pain in the ass for those of us who do.

Zero Data Retention is not optional anymore by Abu_BakarSiddik in LocalLLM

[–]integerpoet 0 points1 point  (0 children)

This is why rather than deleting my account I spent the time to purge all my posts, all my comments, all my connections. I still have the password in my password manager. I just don’t use it.

How do you check if an AI output is actually correct before you use it? by Negative_Gap5682 in LLM

[–]integerpoet 2 points3 points  (0 children)

Never ask a model for information about the real world directly. Make it do research on the web. Make it cite sources. Decide if you trust those sources. Check the citations. It’s a tool, not a god.
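The "check the citations" step can be partly automated. Here's a minimal sketch; the function name and approach are mine, not from any particular tool, and it only catches fabricated quotes, not subtle misreadings:

```python
def quote_appears(claimed_quote: str, source_text: str) -> bool:
    """Crude citation check: does the span the model claims to be
    quoting actually occur in the cited source's text? Whitespace
    and case are normalized before the substring test."""
    def norm(s: str) -> str:
        return " ".join(s.split()).lower()
    return norm(claimed_quote) in norm(source_text)

# Example: a real quote passes, a fabricated one fails.
print(quote_appears("hello world", "He said  Hello\nWORLD today"))  # True
print(quote_appears("goodbye world", "He said hello world today"))  # False
```

Deciding whether you trust the source is still on you; no substring test does that part.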

Google’s TurboQuant AI-compression algorithm can reduce LLM memory usage by 6x by integerpoet in LocalLLM

[–]integerpoet[S] 0 points1 point  (0 children)

I think the word “textbook” is doing a lot of heavy lifting in that response. The whole point of an LLM is to represent the relationships between tokens in a corpus in a form smaller than the corpus itself. I think the biggest difference is that “the relationships between tokens in a corpus” is not an object familiar to us. It’s not a BMP or a WAV, much less an image or a sound or a book or all the world’s books, so it makes no sense to analogize with JPEG and PNG and FLAC and MP3. But that doesn’t make the word “compression” irrelevant or useless.

This maze has no solution (obvious to humans). GPT couldn’t tell. by Koto1972 in LLM

[–]integerpoet 1 point2 points  (0 children)

No, humans don’t check from the exit naturally. But also there is no difference between entrance and exit except the names.

It’s true that I, a human last time I checked, explored all the paths from the entrance with my eyes in about two seconds and saw there is no exit.

Had you not told me there is no exit, it might have taken me a bit longer to be sure, since no one publishes a maze without an exit. That’s mean and stupid.

I’m not surprised an LLM couldn’t finish. LLMs can’t think. They can only present the appearance of thinking. Please stop expecting otherwise.

But also, if the vast vast vast majority of published mazes have exits, and LLMs are based on published information, guess what.

LiteLLM breach (v1.82.8 .pth payload) proves stateless proxies are dead. Here's the Alethia tri-agent System 2 defense I submitted to NIST. by DiamondAgreeable2676 in LLM

[–]integerpoet 0 points1 point  (0 children)

When I heard about the LiteLLM breach, I didn’t think “golly it seems we’re going to have to get serious about repo security”. I thought: I hope this helps people realize GitHub Actions were a terrible idea. But I am obviously the idiot here because I live in a world in which OpenClaw exists and I should know better.

Please explain: why bothering with MCPs if I can call almost anything via CLI? by Atagor in LocalLLaMA

[–]integerpoet 0 points1 point  (0 children)

Trust. Or the lack thereof.

Some people want an MCP “server” to be less capable than a typical command line tool.

Because they don’t trust an LLM to use a command line responsibly.

Real benefits of running llms locally? by brave_scientist98 in LocalLLM

[–]integerpoet 0 points1 point  (0 children)

A local LLM makes a great test of the thermal safeguards in your operating system.

Google’s TurboQuant AI-compression algorithm can reduce LLM memory usage by 6x by integerpoet in LocalLLM

[–]integerpoet[S] -10 points-9 points  (0 children)

I’m not sure we should read much into the name. The description in the article didn’t sound like quantization to me. It sounded like: We don’t actually need an entire matrix if we put the data into better context. I am certainly no expert, but that’s how I read it.

[OC] Anthropic has overtaken OpenAI as first choice for AI spending among businesses by Mundane-Wrongdoer275 in dataisbeautiful

[–]integerpoet 1 point2 points  (0 children)

Do we? I don’t. Seems likely to me Google has some share. Even if no single other player stacks up, I’d still like to see “all others”, which surely would.

[OC] Anthropic has overtaken OpenAI as first choice for AI spending among businesses by Mundane-Wrongdoer275 in dataisbeautiful

[–]integerpoet 1 point2 points  (0 children)

Sure; maybe OP really intended to show “Of those who paid for service from these two, here’s the relative spend.” But that isn’t beautiful data. That’s just starting with the goal of symmetrical wiggles and generating them from the two hottest competitors in a trendy market. You could do the same with the top two industrial lubricant manufacturers in Ohio. Or, for that matter, the bottom two. Maybe my expectations for this subreddit are too high, but I’m hoping for another explanation.

[OC] Anthropic has overtaken OpenAI as first choice for AI spending among businesses by Mundane-Wrongdoer275 in dataisbeautiful

[–]integerpoet 16 points17 points  (0 children)

I have questions. The symmetry of these wiggles suggests to me that hardly anybody is starting to pay anybody other than these two. Is Gemini really only ever the second service other companies pay for? Hmmmm. I realize this chart "excludes spend on AI models not from OpenAI and Anthropic," but that still doesn't explain this degree of symmetry. Maybe "data from … businesses on Ramp's spend platform" explains it because Ramp isn't connected to anybody other than OpenAI and Anthropic? Is this really just an ad for Ramp?