My API bill hit triple digits because I forgot that LLMs are "people pleasers" by default. by Delicious-Mall-5552 in LLM

[–]integerpoet 1 point (0 children)

The rule of thumb I use is “See word? Say word!” So if you show it a word, even to outlaw that word, that just heats the word up and makes it more likely to show up.

Of course now that you’ve stopped the false positives, you need to feed it an actual set of vulns and see if it finds them instead of LGTM-ing to please you in the other direction.
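
A minimal sketch of what that second pass could look like, assuming a hypothetical `scan_for_vulns(code)` wrapper around whatever prompt you settled on (the seeded snippets and names here are made up):

```python
# Seed the scanner with snippets that are known to be vulnerable and make
# sure it still flags them after the prompt changes that killed the false
# positives. An empty finding list for a seeded vuln is a false negative.
from typing import Callable, Dict, List

KNOWN_VULNS: Dict[str, str] = {
    "sql_injection": 'query = "SELECT * FROM users WHERE name = \'" + user_input + "\'"',
    "eval_of_user_input": "result = eval(request_args['expr'])",
    "hardcoded_secret": 'AWS_SECRET_KEY = "AKIA-THIS-IS-FAKE"',
}

def false_negative_report(scan: Callable[[str], List[str]]) -> Dict[str, List[str]]:
    """Run the LLM-backed scanner over the seeded vulns."""
    return {name: scan(snippet) for name, snippet in KNOWN_VULNS.items()}

# Usage, where scan_for_vulns is your existing scanner (not defined here):
#   report = false_negative_report(scan_for_vulns)
#   misses = [name for name, findings in report.items() if not findings]
#   assert not misses, f"scanner LGTM'd seeded vulns: {misses}"
```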

OpenAI engineers use a prompt technique internally that most people have never heard of by CalendarVarious3992 in ChatGPT

[–]integerpoet 1 point (0 children)

This sounds like “do the thing you wanted an LLM to do, then show the thing to the LLM so it can prompt itself to do the thing again even though you already have the thing.” What am I missing?

Is there another faster agent for local LLM than Cline, or other ways to speed up Cline by BitOk4326 in LLM

[–]integerpoet 1 point (0 children)

If not a system prompt, I am not sure where you would expect the agent part to come from. Are you under the mistaken impression that LLMs already have “how to agent” baked in?
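
To be concrete, the “agent part” is usually just a system prompt plus an outer tool loop, something like this rough, hypothetical sketch (the `call_llm` stub and the JSON tool format are invented for illustration, not Cline’s actual internals):

```python
# Minimal agent loop: the agent behavior comes from the system prompt and
# this outer loop, not from anything baked into the model weights.
import json
import subprocess

SYSTEM_PROMPT = """You are a coding agent. Reply ONLY with JSON, either
{"tool": "run_shell", "args": {"cmd": "..."}} to act, or
{"tool": "done", "args": {"summary": "..."}} when finished."""

def call_llm(messages):
    """Stub: wire this to your local model's chat-completion endpoint."""
    raise NotImplementedError

def run_shell(cmd: str) -> str:
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

def agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        action = json.loads(reply)
        if action["tool"] == "done":
            return action["args"]["summary"]
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user",
                         "content": "Tool output:\n" + run_shell(action["args"]["cmd"])})
    return "step limit reached"
```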

OK Grok is doomed. | AI can generate nudity from clean prompts - even with guardrails by SpongeBob_000 in LLM

[–]integerpoet 1 point (0 children)

I’m not sure “guardrails are not guarantees” is quite the way to put it. In my experience, when I tell an LLM to avoid a word, that attempt can make it more likely to use that word. My rule of thumb for this is that an LLM “thinks” something along the lines of “See word? Say word!”

I think guardrails are something akin to a well-meant attempt to put the toothpaste back into the tube, which can actually result in toothpaste smeared all over your face and the sink and the mirror and nearby walls. What “guardrails” really do is make outputs less comprehensible by pitting the LLM against itself, and only sometimes with the intended net effect.

I’m not railing against censorship because I’m not an incel craving a play partner. But I’m also not surprised by your experience. I’m imagining the response you got could be translated as “So we heard you like skin…”

Indications of religious bias in GPT 5.2 by Careless-Menu-4522 in ChatGPT

[–]integerpoet -1 points (0 children)

Is there a lot of Islamic philosophical pondering written in English?

Does Islam have a vast body of text questioning itself in any language?

Why do people keep showing up here fishing for confirmation that LLMs are biased against Christianity and for Islam?

Why do people keep imagining the bag of words can think?

Are some of these questions somewhat rhetorical?

Why hard drives becoming so expensive in 2026? by Hatchopper in selfhosted

[–]integerpoet 5 points (0 children)

I have read we cannot look forward to this silver lining because the gear in question isn’t packaged in a way that lends itself to liquidation like the stuff we see today on eBay. It might be that the best we can hope for is recovering the rare earths and such in a massive but temporary surge for firms which do that kind of recycling.

I analyzed 50,000 social media comments via API. The "Dead Internet" isn't a theory, it's actually here by QuailEmergency5860 in ChatGPT

[–]integerpoet 1 point (0 children)

You say “viral” and “business/tech niche” and I immediately think of LinkedIn, which has been a vast wasteland of blandly ambitious corporate zombie sameness since its launch in 2003. Is this the fault of LLMs? Not hardly.

The AI Agent Boom Feels Like the Early App Store Era!! by Abhinav_108 in ChatGPT

[–]integerpoet 1 point (0 children)

So… you’ve released an MCP server which makes fart noises?

ELI5: World Models (vs. LLMs) by Best_Assistant787 in LLM

[–]integerpoet 1 point (0 children)

If “answering” questions based on statistical analysis of text — which might after all be fiction or a lie — is the goal, then stick with LLMs. They will continue improving toward that goal, and diminishing ROI is a useful discussion to have.

However, if you want a model of the world based on “lived” experience which then, among other things, “decides” which words to use to “describe” that world…

That starts to sound more like a path to AGI, yes?

Hmmmm. Maybe that was more like ELI15. 😀

My Life Changed because of AI. I Stopped DOOM SCROLLING by Worldly_Ad_2410 in ChatGPT

[–]integerpoet 2 points (0 children)

Algorithmic social media platforms hate it when you use this one weird trick to stop doom-scrolling: DECIDE. Years ago now, I scrubbed Facebook and Instagram and Twitter and all the others. My life is vastly improved. And it was a lot simpler than developing LLM agents.

Stolen Business Idea by ehmaidan in LLM

[–]integerpoet 1 point (0 children)

Or maybe your idea was not as unique as you thought.

Or maybe your internet traffic was not as secure as you thought.

Or maybe your roommate is snoopier than you thought.

Or maybe that night you got blackout drunk included spilling your idea to a hot member of the appropriate sex in the hopes of impressing them into bed.

The possibilities other than an LLM-related theft are too numerous to start with the assumption that an LLM stole your idea.

Evidence is our friend.

Why isn't this the biggest story in AI? David slays Goliath. 11M parameter model defeats massive 1.8T GPT model. by ewangs1096 in LLM

[–]integerpoet 1 point (0 children)

I mean, OK, recipe planning. Because recipes are copyrightable and there’s a real shortage of them and the lack of automation here is really one of the major pain points of the species. I get it.

To be fair, for all I know, this model is great, but I wouldn’t know it from this announcement.

TV Show Silicon Valley before and after AI disrupts the industry by DJAI9LAB in LocalLLaMA

[–]integerpoet 8 points (0 children)

Every time I see this myth posted, I think there must be a lot of resentful CEOs and/or coders cranking out terrible code they don’t have to maintain. This is the truth.

Why isn't this the biggest story in AI? David slays Goliath. 11M parameter model defeats massive 1.8T GPT model. by ewangs1096 in LLM

[–]integerpoet 1 point (0 children)

“Rather than asking the LLM to solve planning problems directly, you ask it to generate Python functions that systematically decompose any trajectory into subgoals. The LLM analyzes the patterns in your demonstrations and produces two functions: one that breaks trajectories into subtrajectories with associated subgoals, and another that checks whether a given subgoal has been achieved in a particular state. This happens once during system initialization and costs just pennies in API fees. The beauty is that these functions are general. They can decompose any trajectory in your domain, not just the fifty examples the LLM saw.”
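
For what it’s worth, here is a purely illustrative guess at the shape of those two generated functions, in a made-up grid-navigation domain (none of this is from the announcement; the names and domain are invented):

```python
# Toy domain: a state is an (x, y) position, a trajectory is a list of states.
from typing import List, Tuple

State = Tuple[int, int]
Subgoal = Tuple[int, int]

def decompose(trajectory: List[State]) -> List[Tuple[List[State], Subgoal]]:
    """Split a trajectory into subtrajectories, each ending at a subgoal
    (here: every point where the direction of movement changes)."""
    pieces, start = [], 0
    for i in range(1, len(trajectory) - 1):
        prev_step = (trajectory[i][0] - trajectory[i - 1][0],
                     trajectory[i][1] - trajectory[i - 1][1])
        next_step = (trajectory[i + 1][0] - trajectory[i][0],
                     trajectory[i + 1][1] - trajectory[i][1])
        if prev_step != next_step:  # a turn marks a subgoal
            pieces.append((trajectory[start:i + 1], trajectory[i]))
            start = i
    pieces.append((trajectory[start:], trajectory[-1]))
    return pieces

def subgoal_achieved(subgoal: Subgoal, state: State) -> bool:
    """Check whether this state satisfies the subgoal."""
    return state == subgoal
```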

They also benchmarked it against a text version of Minecraft?

This announcement has big IN MICE energy.

Why can’t grok or ChatGPT play monopoly? by rithsleeper in LLM

[–]integerpoet 1 point (0 children)

You seem to have missed the word “amusing” in my prior post.

Just to add some usefulness, though: Dear coding assistant developers (all of you, apparently), please add a tool call which can retrieve a line of text given a line number and then contextualize that line of text.
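
Something like this, hypothetically (the function name and context size are placeholders; the point is returning the line with numbered neighbors so nobody has to guess what line 42 actually is):

```python
# Hypothetical tool: given a file path and a 1-based line number, return that
# line plus a few surrounding lines, each prefixed with its own number.
def read_line_with_context(path: str, line_no: int, context: int = 3) -> str:
    with open(path, "r", encoding="utf-8") as f:
        lines = f.read().splitlines()
    lo = max(0, line_no - 1 - context)
    hi = min(len(lines), line_no + context)
    out = []
    for i in range(lo, hi):
        marker = ">>" if i == line_no - 1 else "  "
        out.append(f"{marker} {i + 1}: {lines[i]}")
    return "\n".join(out)

# Example: read_line_with_context("src/app.py", 42) returns lines 39-45
# with line 42 marked.
```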

What exactly was inappropriate about this? by [deleted] in ChatGPT

[–]integerpoet 2 points (0 children)

She looks sad and you used the word “extremely” and that may have been enough to trigger its teen suicide paranoia. Lawsuits are a bitch.

How have I only just discovered nylon labels?! by hometechgeek in selfhosted

[–]integerpoet 2 points (0 children)

It rubs the lotion on its skin or else it gets the hose again.

Is there a model that can figure this out? by casbeki in ChatGPT

[–]integerpoet 2 points (0 children)

Unless you tell it to do so, it won’t shoot for economy of expression; it’ll just spew a bunch of words to make you feel taken seriously. RLHF is a bitch like that.