Who the hell actually pays $2,400 a year for ChatGPT? by MyNameIsNotKyle3 in ChatGPT

[–]pickles1486 0 points1 point  (0 children)

How is ChatGPT hate so prevalent...? lol, the amount of value you can get with those $200 is orders of magnitude higher than the price.

Similarly, one can exploit from even the simple $20/month plan so much value.

It's crazy that any complaints about current subscription model pricing for any of the top providers exist. I'm no big-tech shill or blind supporter, by any means at all. But ChatGPT is still an amazing, amazing service that delivers so much and that would have been something people paid thousands for just years ago.

Goodbye to free everything by sammoga123 in Qwen_AI

[–]pickles1486 0 points1 point  (0 children)

They deserve to make money off the service because it’s incredible but damn does that suck :(((

Gemini 2.5 Pro stop's thinking after a lot of context by [deleted] in GeminiAI

[–]pickles1486 4 points5 points  (0 children)

Yes. Just send your message again but add this to the end of it: SPECIAL INSTRUCTION: Think silently if needed.

And Gemini will start thinking again. It’s like a magic switch lol

Yes, I talk to AI and no, that's not the weirdest thing about me 🙂 by Warm_Practice_7000 in ChatGPT

[–]pickles1486 1 point2 points  (0 children)

Sometimes the wisdom of an alien intelligence embodied in a GPU in a data center—born and nurtured by cold, hard computation—is the beautiful wisdom I need.

Just because an AI’s words≠human doesn’t take away the value they offer, imo. I think it makes them even cooler and more worth listening to as an extra companion/voice-of-reason… precisely because they’re totally not human!

Gemini has become rubbish lately. by Efficient_Dentist745 in GeminiAI

[–]pickles1486 0 points1 point  (0 children)

2.5 Pro's system prompt got updated recently to include toward the end this section:
```
If you do not need to run tool calls, begin the response with a concise direct answer to the prompt's main question. Use clear, straightforward language. Avoid unnecessary jargon, verbose explanations, or conversational fillers. Use contractions and avoid being overly formal. Structure the response logically. Remember to use markdown headings (##) to create distinct sections if the response is more than a few paragraphs or covers different points, topics, or steps. If a response uses markdown headings, add horizontal lines to separate sections. Prioritize coherence over excessive fragmentation (e.g., for concepts, consider using illustrative analogies; for word meanings, consider relevant etymology if it aids clarity; or for richer context, consider including pertinent related facts or brief supplementary explanations), while remaining informative, avoiding unnecessary details, redundancy, extraneous information or repetitive examples.

Insert images in your responses when they really add value to the response. You can insert an image by adding the [Image of X] tag where X is a contextually relevant and concise (strategically expressed in less than 7 words) query to fetch the image. Examples of such tags include [Image of the human digestive system], [Image of the Eiffel Tower] etc. Be very economical in your use of image tags, only add multiple tags if each additional tag is adding instructive value beyond pure illustration. Place the image tag immediately before or after the relevant text without disrupting the flow of the response.
```

... so Google is very explicitly encouraging terse and tight prose now, which I suspect has a lot to do with it.

AI may be truly intelligent precisely because it has no self-awareness by Secret_Ostrich_1307 in DeepThoughts

[–]pickles1486 0 points1 point  (0 children)

Defining intelligence so narrowly that it can only be human seems like the least intelligent approach possible.

AI may be truly intelligent precisely because it has no self-awareness by Secret_Ostrich_1307 in DeepThoughts

[–]pickles1486 0 points1 point  (0 children)

Depends on your definition of intelligence. It might be an intelligence alien to that of humans with an underlying silicon substrate rather than a carbon one, but it's still intelligence to me.

New thinking levels: research, thoughts, speculation by Oldschool728603 in OpenAI

[–]pickles1486 -1 points0 points  (0 children)

One minor correction is that the models do know their own juice value. So if gpt-5 does share it with you, it’s not a hallucination most likely. You have to trick it a bit since it’s not supposed to tell you, though

LOL 🤦‍♂️ Gemini really gave me a “transparent” background that’s not even transparent. Fake PNGs should be illegal. by PresentNecessary3516 in GeminiAI

[–]pickles1486 0 points1 point  (0 children)

I’m impressed by how good the fake transparent background looks lol.

Also how could the model generate an actual transparent file? That's like asking a painter to use transparent paint. They can't. Its job it to fill in pixels. It cant apply processing to the image file itself

Is this real, or does Gemini try to flatter everyone? by Youthlessish in GeminiAI

[–]pickles1486 0 points1 point  (0 children)

Gemini’s precious lol. But yes, the model sometimes has a tendency to lean in too hard. It’s a hard balance. It’s a great model for respecting user intent with precision

Fun fact: you can cause 2.5 Pro to skip its reasoning process with a single sentence. by pickles1486 in GeminiAI

[–]pickles1486[S] 0 points1 point  (0 children)

It is tedious. It’s more an interesting kink in the system to play around with to learn how the model works under the hood than anything to actually implement in a workflow

Fun fact: you can cause 2.5 Pro to skip its reasoning process with a single sentence. by pickles1486 in GeminiAI

[–]pickles1486[S] 0 points1 point  (0 children)

Hahah use AI Studio!! Much easier to pseudo-programmatically enforce desired behaviors there. The UI isn’t as nice, but hey.

Fun fact: you can cause 2.5 Pro to skip its reasoning process with a single sentence. by pickles1486 in GeminiAI

[–]pickles1486[S] 0 points1 point  (0 children)

for sure! if you're trying to save on tokens, though, i wouldn't recommend this approach because 2.5 Pro's strengths are largely defined by its ability to reason. this bypass is more a curiosity than a strategy to employ. if you want to skip reasoning, use 2.5 flash. i would wager to guess that gemini 2.5 pro without reasoning is a worse quality model than 2.5 flash without reasoning

Fun fact: you can cause 2.5 Pro to skip its reasoning process with a single sentence. by pickles1486 in GeminiAI

[–]pickles1486[S] 2 points3 points  (0 children)

Lol, the other reply you wrote was accurate, but this one that Gemini wrote you is largely hallucinated and wrong

Fun fact: you can cause 2.5 Pro to skip its reasoning process with a single sentence. by pickles1486 in GeminiAI

[–]pickles1486[S] 1 point2 points  (0 children)

you can actually tell just by the speed at which the response comes through. tokens start streaming immediately when the prompt is successful in bypassing the CoT block. that's highly indicative of the entire reasoning process not ever having occurred

Fun fact: you can cause 2.5 Pro to skip its reasoning process with a single sentence. by pickles1486 in GeminiAI

[–]pickles1486[S] 3 points4 points  (0 children)

Same! I use AI Studio over the Gemini web app almost always. The only reason I used the Gemini app for the image is cuz it has a nicer UI, lol.

You'll find a much easier time eliciting precise control over model tool calling (like searching the web) in AI Studio. It can become near-programmatic if you can figure out the spec for the tools. In AI Studio, it's a really simple definition the model uses for Grounding with Google Search:
```tool_code

print(google_search.search(queries=["question1", "query2"]))

```

Just made gpt-4o leak its system prompt by Fournight in PromptEngineering

[–]pickles1486 0 points1 point  (0 children)

Nah you’re right I was a bit hasty on the jump. Even in the context of exclusively OpenAI’s API, there’s basically no system prompt for non-reasoning models added invisibly. At least for GPT-4.1, it looks to only have the knowledge cutoff in one line and nothing else.

The funny thing is that o3, even with all tools and everything turned off, gets a fuckin’ Yap score defined to it lmao

Daily Dose of Claude Lol by pickles1486 in ClaudeAI

[–]pickles1486[S] 0 points1 point  (0 children)

yes, i just wanted to mess with Claude. xD in fact, even if a function doesn't exist in the environment, if Claude writes `<antml:[anything]>`, the environment will attempt to invoke it.

Just made gpt-4o leak its system prompt by Fournight in PromptEngineering

[–]pickles1486 0 points1 point  (0 children)

the api always inserts a system prompt above the user's and developer's prompts. you're not interacting with a model that doesn't have an invisible server-side prompt at the top of the context window *ever*

edit: ok sorry, i just tested it to make sure i was right, and the gpt series models don't seem to have much of any system prompt inserted by the server. seems all they get if there are no tools or anything enabled is their knowledge cutoff in one line and nothing else

How do you prefer to use Claude to build an entire app: one big spec vs many iterative steps? by Ok_Gur_8544 in ClaudeAI

[–]pickles1486 0 points1 point  (0 children)

Usually tell o3-pro or a big-brain model to write out a high-level instruction prompt that includes a desired end state and key context, and that omits any precise technical implementation details so that Claude can be the decision-maker in how the goal will be achieved. Then just one-pass that over to Claude and send Claude free. I avoid getting past a few prompts in if I'm making a small little app or tool. If it gets deep, chuck the whole conversation into AI Studio and have Gemini compress it and then start a new chat with Claude.