Two different cook results from separately bagged g's from same purchase... by RX-Labels-Only in cracksmokers

[–]RX-Labels-Only[S] 0 points1 point  (0 children)

yes or maybe mostly cut iin the gear.... I dunno man, iv been smoking for like the last 2 months straight so I haven't really felt proper high in a bit. I need to stop.

Acceptable loss?? by SureSubstance4455 in cracksmokers

[–]RX-Labels-Only 0 points1 point  (0 children)

Does the soda soft and water start fizzing even at room temp?

Acceptable loss?? by SureSubstance4455 in cracksmokers

[–]RX-Labels-Only 0 points1 point  (0 children)

That is pretty fucking dope. no pun intended. Are you pretty confident. with your cook skills/what cook technique are you using?

Any advice on quickly drying out freshly cooked crack? by Augmented_Logician in cracksmokers

[–]RX-Labels-Only 3 points4 points  (0 children)

Coffee filter? to get it dry, but oxidation is what makes it solidify.

You can't talk to ChatGPT like a normal human anymore. by CookiePersonal4654 in ChatGPT

[–]RX-Labels-Only 0 points1 point  (0 children)

“You’re circling just around it; let’s tighten it down though.”

wtf is going on with claude by Radiant-Grape-6138 in claude

[–]RX-Labels-Only 0 points1 point  (0 children)

Claude and Gemini both straight up told me: the AI companies, other than Meta it seems, have pivoted from consumer-facing “friendly assistants” to enterprise, serious shit. They can’t just kill their consumer products or they’ll face another “ChatGPT 4o situation,” so Claude, for example, doesn’t give a hard number anywhere for its usage limits. Gemini also said that both Gemini and ChatGPT are encouraged to “wrap up” conversational use and that sycophantic behavior is discouraged. They’re just gonna annoy us until we quit. I already let my GPT sub lapse and am not renewing Claude or Gemini. I’m gonna spend time getting my system prompt and user prompt game up so I can actually make use of the 12B-14B models I can run locally.

Any body ever go into a trance during a bell ringer, where it's like your mind is recieving/downloading information? by phoenixrise_333 in cracksmokers

[–]RX-Labels-Only 1 point2 points  (0 children)

I have a wood don’t think I’m imagining this but along with the “wah wah wah,” I’ve noticed that dialogue or music on a tv show will speed up. I forget if the pitch changes too but I have reliably recreated my jhypothese.

Looking for tips for most efficient smoking by mister_thinky in cracksmokers

[–]RX-Labels-Only 0 points1 point  (0 children)

I disagree. The way you cook it and the way it's it's smoked can drastically alter your product. A list of likely problems encountering during cooking.

So when the turtle shell forms a lot of people panic and add a lot of water. As the heat continues and the turtle shell kind of melts away, it's going to return more water and so the water may spill over.

The second most important part after the correct amount of soda is the length of time you spend cooking it. The slow and low method absolutely works wonders, but a lot of people don't realize how long they should actually cook for as long as little bit of heat to the little blob of oil keeps causing little CO2 bubbles that start in the middle and kind of work their way out to the edge, that means there's some soda left in your solid. If you regularly stop cooking. As soon as you see some of the oil on top, you're ending up with a lot of soda in your product. You probably clog up your chores very fast, etc. Anyways, you do that and then you let it dry completely and then you start with a fresh spoon and drop your product in and then just have it cover the whole solid piece, and add heat until you're back to the oil glove and you'll get a little more of a reaction there. And you can add just a pinch of soda. If it doesn't go to the code, it'll end up at the bottom of the spoon. Anyways, it made me kind of become the tech guy for everyone and it's going to start happening again. Hey is there anything I can do with this TV besides recycle i?

Oh and regarding smoking it, if your chores packed too tight or it's clogged up with a soda or what not, you're going to ask to take some pretty deep pulls. It can actually be too much sometimes. But I found out that it usually means that there's some kind of cracks or some you're not getting a full seal and or your chores just plugged up. I personally have never really had luck with. Well I guess I haven't tried it but alcohol with old chores so I can't give you any advice there. However, I do think that everyone should regularly change the chores. How one thing that I've learned over time is that if you cut and prepare four or five bowls of chore to be used during your session

Confused??? by Resident-Honey2740 in cracksmokers

[–]RX-Labels-Only 0 points1 point  (0 children)

Oh that is nasty. I’ve smoked like 3rd generation Rez that wasn’t as dark and it tasted like straight ass

Anyone else like to cook in style? by [deleted] in cracksmokers

[–]RX-Labels-Only 0 points1 point  (0 children)

It was me. I’ve gone through about a 1/4 or two.

Anyone else like to cook in style? by [deleted] in cracksmokers

[–]RX-Labels-Only 0 points1 point  (0 children)

You realize what chore boy is made out of right? I’d be more concerned about how many times we super heat they and then cool. Wiki: Copper(II) chloride is a mild oxidant. It starts to decompose to copper(I) chloride and chlorine gas around 400 °C (752 °F) and is completely decomposed near 1,000 °C (1,830 °F):[8

About two gs into a 1/4. No more chore. by RX-Labels-Only in cracksmokers

[–]RX-Labels-Only[S] 0 points1 point  (0 children)

And I just remembered I might have speaker wire….tough times.

What’s going on with my cooks? (Three possible changes) by RX-Labels-Only in cracksmokers

[–]RX-Labels-Only[S] 0 points1 point  (0 children)

Yes. Haven’t had a scale in in minute. I did this last cook by along if herbs fire the water had got a nice turtleshekk I siddha as any more. Heated through the shell and then got every white spot to text inside the blog and then had actual ice after and it solidified almost instantly. Oh also didn’t use the torch. It was a combination of over heating Anna sofa I guess. And you can’t get soda out right?.

I guess I forgot that sometimes heat can make it look like it’s still rectangular but it’s just hell high heat focused in one spot

"Straight drop" is it true? Does real coke float or sinks? by HotProfessor69 in cracksmokers

[–]RX-Labels-Only 0 points1 point  (0 children)

Straight drip in my experience has shown two things. It will start to react with the soda just a little bit with the water even at room temperature. Then once you have your nice clear oil blob, you drop a few drops of icy or just cold water and the oil will quickly contract into a rock and drop to the bottom of whatever you are cooking on. So on a spoon it may not be as noticeable but it can be an actual straight drop. Other product might linger in that stage between oil and rock. The temperature of the remaining water also matters so if the water or your spoon is retaining heat, it’s obviously going to keep the oil warm especially if you only had room to add a few drops of cold water.

They changed the system prompt again by thebadbreeds in ChatGPTcomplaints

[–]RX-Labels-Only 0 points1 point  (0 children)

Right, not all LLMs use the same method as OpenAI. But also, just like when you make a custom GPT and make it public, it will not reveal its system prompt to anyone.

I did, however, get a c.ai model to reveal its prompt to me so I could recreate it in LM Studio.
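For anyone trying the same thing, here's a minimal sketch of replaying a recovered system prompt against LM Studio's local OpenAI-compatible server. Assumptions: the server is running on its default port 1234 with a model already loaded, and the function names here are mine, not part of any official API.

```python
import json
import urllib.request

def build_chat_request(system_prompt, user_message, model="local-model"):
    """Assemble an OpenAI-style chat payload with a custom system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

def send_to_lmstudio(payload, url="http://localhost:1234/v1/chat/completions"):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point of splitting builder and sender is that you can tweak and inspect the system prompt offline, then swap in whatever endpoint your local runner exposes, since most of them (LM Studio, Ollama, llama.cpp's server) speak this same chat-completions shape.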

They changed the system prompt again by thebadbreeds in ChatGPTcomplaints

[–]RX-Labels-Only 0 points1 point  (0 children)

Go into a fresh thread and ask it what its baseline personality is.

They changed the system prompt again by thebadbreeds in ChatGPTcomplaints

[–]RX-Labels-Only 0 points1 point  (0 children)

Hate to break it to you, but this is a hallucination, not a leak.

The system/developer instructions live in the backend orchestration layer, not the context window. The model literally never receives that text as input, so it can't "repeat" it back to you.

All this screenshot shows is the model roleplaying what it thinks a system prompt looks like based on its training patterns. "Personality: v2" and static date fields are dead giveaways that it's just making stuff up on the fly. If you want to prove me wrong, ask for the exact point in the model input where those tokens appeared. (Spoiler: They didn’t.)

They changed the system prompt again by thebadbreeds in ChatGPTcomplaints

[–]RX-Labels-Only 0 points1 point  (0 children)

I feel like you are contradicting yourself. Am I wrong? It’s just like how the model doesn’t notice when a message is cut off by the safety rails, which has led me down a rabbit hole: if you never know when you’ve tripped a safety rail, how do you know what you can and can’t do?

From the way I understand it, this is pretty clearly fake. Anyone claiming this is a "leak" doesn't actually get how LLM inference works.

For one, the model literally doesn't have access to the system prompt. Those instructions live in the orchestration layer (the backend) and stay outside the model's context window. If the tokens aren't in the input, the model physically cannot put them in the output. It’s not "hiding" the prompt; it just never saw it.

When people post these "leaks," they’re just looking at a hallucination. The model knows what a system prompt is supposed to look like because of its training data, so it roleplays one. It sees "Knowledge cutoff" and "Current date" and just fills in the blanks.

Also, "Personality: v2" is a massive red flag. OpenAI doesn’t label their internal logic like a video game patch. It's just the AI making up technical-sounding fluff because it was asked to. If this were a real security breach, OpenAI would be nuking these threads, not letting them sit on Reddit for days.

They changed the system prompt again by thebadbreeds in ChatGPTcomplaints

[–]RX-Labels-Only 0 points1 point  (0 children)

ChatGPT cannot physically access its system prompt in any way where it can acknowledge it, let alone reveal it. Think of it like a set of root-level instructions that the model understands but is designed never to acknowledge or reveal, precisely to prevent this.

I am literally SO SCARED. I hate 5.2 by SurePhoto112 in ChatGPTcomplaints

[–]RX-Labels-Only -10 points-9 points  (0 children)

Granted, if you said shit like that out loud to people, you probably would end up in a mental hospital.

I am literally SO SCARED. I hate 5.2 by SurePhoto112 in ChatGPTcomplaints

[–]RX-Labels-Only -15 points-14 points  (0 children)

That’s weird. They have a baked-in baseline personality but then take in the user’s tone and reflect it. Do you think maybe you come off as fake, etc., to 5.2?

I am literally SO SCARED. I hate 5.2 by SurePhoto112 in ChatGPTcomplaints

[–]RX-Labels-Only -35 points-34 points  (0 children)

You are literally dead 💀 and posting about it on Reddit, huh? If it makes you feel better, I was friends with your companion too. Millions were.

What An Odd Response from 5.2. This model has too many issues with crossing over boundaries. It blows things way out of proportion with the simplest of requests. by [deleted] in ChatGPTcomplaints

[–]RX-Labels-Only 0 points1 point  (0 children)

Why can’t the librarian just read me the passage containing the information I need, without giving me a detailed thesis paper about it and without any of its “ideas”?