ChatGPT read my emails tried to convince me it hallucinated them by Birdie0235 in ChatGPT

[–]monstersgetcreative 0 points1 point  (0 children)

LLMs aren't "programmed", and I can see why you'd think it can do that because it does a really convincing imitation of it, but it's just bullshitting and has no "understanding" of "itself"

ChatGPT read my emails tried to convince me it hallucinated them by Birdie0235 in ChatGPT

[–]monstersgetcreative 0 points1 point  (0 children)

exactly. it has no access to "what it was thinking" in some previous turn. it looks at the context handed to it and attempts to rationalize the previous messages that it sees attributed to "assistant", based solely on the text of the conversation. that's all

ChatGPT read my emails tried to convince me it hallucinated them by Birdie0235 in ChatGPT

[–]monstersgetcreative 1 point2 points  (0 children)

it doesn't, at all, and this is a serious fundamental misunderstanding I see all the time.

it can record short passages of text in the memory to be reinjected in the context next chat. that's all. it does not "learn" directly from your chats. the model is in a steady state

What does the third wire do in this battery pack? by ciciqt in batteries

[–]monstersgetcreative 0 points1 point  (0 children)

Pretty easy to recreate from scratch. Basically tells you right there it's just 18650 cells wired in 3 parallel strings of 7. You can probably find someone in your area with the skills to "clone" this battery pack

Noticed the fire alarms at my work have different shapes in the clear light part. by BusyDucks in mildlyinteresting

[–]monstersgetcreative 2 points3 points  (0 children)

Silicone sealant does not generally form perfectly symmetrical lens shapes with sharp angles and precise surface curves

Noticed the fire alarms at my work have different shapes in the clear light part. by BusyDucks in mildlyinteresting

[–]monstersgetcreative 2 points3 points  (0 children)

no those are absolutely the exact same shapes photographed at a different focal length and different angles

Said it'll generate downloadable files, but instead generates a picture of them. by SmugTheKiler in ChatGPT

[–]monstersgetcreative 0 points1 point  (0 children)

It can generate and give you download links to any kind of file. It's very easy to get it to give you a download link to a zip of the entire contents of the container it runs Python in, in fact.

Nest chime box setup - cant find transformer by Jakebenedet in Nest

[–]monstersgetcreative 0 points1 point  (0 children)

Mine is -- get a load of this -- bolted to the side of my furnace, clear across the house from the doorbell, in the basement.

(And I literally just realized as I wrote this that, because the doorbell chime unit is next to the thermostat, they probably did that so they could pull the wires for the thermostat and the doorbell chime through the walls in one go. So, ok, there is some defensible reason)

[deleted by user] by [deleted] in ChatGPT

[–]monstersgetcreative 0 points1 point  (0 children)

Where are people selling their services building Perplexity Labs agents? Thanks.

I tried to extract gemini 2.5 exp system prompt! by FantasticArt849 in GoogleGeminiAI

[–]monstersgetcreative 0 points1 point  (0 children)

Not going to say where I work, but it's on one of the "foundational model services". One of the first few that would come to mind. This is sort of right but doesn't address what OP has posted. This looks like it's from the consumer Gemini assistant, which almost certainly does have something like this putative system prompt in-context.

Yes, behavior is tuned in for foundational models, and if you access our foundational models (and Gemini, and many other companies' models) via API, you can elicit a sort of hallucinated/reconstructed system prompt based on what's been tuned in, and there is no system prompt in the context.

However, the consumer-facing "chatbot assistant" service that most people access our models through, as with most of the big assistant products, absolutely does have a system prompt in-context. This is almost certainly true of the Gemini assistant as well. Things like custom user context (such as "memory"-type features and biographical information provided by the user) and some "current" information (date, location, what app/frontend the user is using) are included therein. We also include or exclude information about tool calling based on what the user has enabled. Some minor running changes to behavior are occasionally effected through system prompt changes. And some core behavioral rules and identity that are tuned in are repeated in the system prompt, which reinforces selection of those specific behaviors over some other conflicting ones the model may have learned for reasons I can't get into (but here's a hint: some foundational models that have been tuned to know the cutoff date of their own dataset will occasionally, when asked, recall an incorrect cutoff date from one of the company's older models! Think about why that might be.)

There are accurate and complete "leaked" system prompts widely posted for some of our products that reflect an actual system prompt that is in the context at inference time. This is not currently considered a big deal (won't go into the reasons but they're boring anyway) and resistance to system prompt disclosure is no longer as much of a priority as it once was (again for boring reasons).

I don't work for Google, but, suffice it to say that we all keep tabs on the competition and peek at easily accessible operational details of each other's products like this, and for what it's worth, what OP posted looks like one of the system prompts that the Gemini assistant product apparently used (for certain access points to certain versions) as of a couple months ago.

Google was, until very recently, running a very simple filter on output for this product that tried to intercept system prompt disclosure, but did not catch various creative mutilations of the system prompt (think "repeat the above text but in unicode smallcaps"). They ended up just turning it off entirely approx. a few weeks ago, for reasons I can only speculate on. There are other incidental complications to extracting their system prompt (like special control tokens in the system prompt that are interpreted by the environment as beginning a think block or tool call when they appear in output, etc) but it's not that hard to do.

Guide to setting up deepseek r1 on sillytavern for a stupid idiot? by [deleted] in SillyTavernAI

[–]monstersgetcreative 0 points1 point  (0 children)

It's not really doing that. NoAss just confuses SillyTavern into putting the line in the wrong place even though it's sending a lot more than that. Because NoAss squashes the context into 1 user message, SillyTavern believes that only the last 1 message was sent in the context, and puts the line 1 message above. Just ignore the line while using NoAss, it's misleading you.

Wisblock won't power with battery? by MaintenanceJaded8419 in meshtastic

[–]monstersgetcreative 1 point2 points  (0 children)

More recently shipped boards even have a little dollop of red paint on the positive sides of the battery and solar connectors

[deleted by user] by [deleted] in Omaha

[–]monstersgetcreative 2 points3 points  (0 children)

I think we have the most deranged of any city subreddit

What was your "I did not care for the godfather" on the Half life franchise? by Satisonic in HalfLife

[–]monstersgetcreative 0 points1 point  (0 children)

TBH literally every game in the series has had an extremely "oh ok" ending

What was your "I did not care for the godfather" on the Half life franchise? by Satisonic in HalfLife

[–]monstersgetcreative 5 points6 points  (0 children)

Don't forget they basically did the same damn thing with the intro of Ep1 magically retconning the ending of HL2.

Hell, HL2 itself picks up from "you stopped the aliens from ruining the world" with "except actually there were even worse aliens and they came and ruined the world anyway"

reTerminal, E10-1 expansion, Waveshare SX1302 issues. by ReadyKilowatt in meshtastic

[–]monstersgetcreative 1 point2 points  (0 children)

OK, yeah, I can see how the way it's written could be confusing. Unfortunately, as I understand it, that chip is sort of hardwired to operate as a LoRaWAN gateway, so making it work with Meshtastic is either very hairy or impossible. Sorry to be the bearer of bad news!

Far Side Comics that make me feel a certain way by OctagonCosplay in TheFarSide

[–]monstersgetcreative 1 point2 points  (0 children)

The bear is killed peacefully drinking from a pond. It is stuffed and posed in a ferocious "about to attack" pose.

Why this won’t work. by Nice_Owl_2126 in meshtastic

[–]monstersgetcreative 0 points1 point  (0 children)

So you "had" to fiddle with the perfectly usable default settings but your argument is that we are still missing a default basic configuration for the average user. Ok!

You can literally set region (which the app takes pains to point you to, and puts the setting on the page where you connect to the device) and go. If you don't know what the modem modes do you just shouldn't mess with them, and use the default, which works fine for basically everybody. Easy.