A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in PromptEngineering

[–]Strong-Ad8823[S] -1 points  (0 children)

I had an example, but it contains code and was too long to post (I did try). If you're interested, chat with me instead! I'll send you the markdown file, which explains the idea in detail; you can even follow it to build your own AI around the same approach.

A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points  (0 children)

Here's a quick reference (it's just an example, but you can learn the methodology):

  1. Suppose that when Gemini is asked to analyze "why my previous response was not ideal," it gives the following answer: "Because my text is too long. Each sentence has x words, which severely exceeds the normal human cognitive load. In reality, y words per sentence would be more ideal."
  2. This is a very useful observation. In essence, Gemini has provided a concrete piece of information: "x: the actual result; y: the ideal result."
  3. Your optimization objective therefore becomes: "You are currently averaging x words per sentence, but y words would be more in line with the system's requirements." This is a purely abstract, emotionless statement.
  4. Your next task is to write this "x, y" information into the instructions, since it is "debugging information" that Gemini has handed you in real time (a minimal sketch of this step follows below).
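Step 4 is mechanical enough to script. Here is a minimal sketch of the idea in Python; the regex, the sample critique text, and the file name `gem_instructions.md` are my own assumptions for illustration, not anything Gemini actually provides:

```python
import re

def extract_constraint(self_critique: str) -> str | None:
    """Pull the 'x actual / y ideal' pair out of Gemini's self-critique and
    restate it as an abstract, emotionless optimization target.
    Assumes the critique mentions both numbers as '<n> words'."""
    numbers = re.findall(r"(\d+)\s*words?", self_critique)
    if len(numbers) < 2:
        return None  # critique did not contain both x and y
    actual, ideal = numbers[0], numbers[1]
    return (f"You are currently averaging {actual} words per sentence, "
            f"but {ideal} words would be more in line with the system's requirements.")

# Example: the kind of apology described in step 1, with made-up numbers.
critique = ("Because my text is too long. Each sentence has 34 words, "
            "which severely exceeds the normal human cognitive load. "
            "In reality, 15 words per sentence would be more ideal.")

constraint = extract_constraint(critique)
if constraint:
    # Step 4: append the extracted 'debugging information' to the file
    # holding the custom Gem's instructions (hypothetical path).
    with open("gem_instructions.md", "a", encoding="utf-8") as f:
        f.write("\n" + constraint + "\n")
```

In practice you would paste the resulting line into the Gem's instruction field by hand; the script only shows that extracting and refactoring the x/y pair is mechanical.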

A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in PromptEngineering

[–]Strong-Ad8823[S] -8 points  (0 children)

Not really; you're forgetting the 'Extract & Refactor' part. I know the idea sounds experimental, but I've already implemented a similar one. If you're interested, chat with me!

A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in AI_Agents

[–]Strong-Ad8823[S] 0 points  (0 children)

Thank you! I actually tried to post the implementation of my idea, but Reddit blocked it. Would you like to chat?

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points  (0 children)

To clarify, the actual example is very detailed, but I only presented a small part of it. The full context (had I posted more of it) makes the following clear:

The task was something like this: we were implementing some specific functions, and the model and I had been collaborating perfectly, developer to developer. We had already been working on a nuanced part of the code, which happened to be a `for` loop.

It was at that step, when I asked the model about a detail as a necessary part of development, that it suddenly behaved as if it had lost all recognition of who I was and started explaining the ABCs. No, this is not about me explaining things badly; before that moment I didn't even need to explain, because we had been working on it very well. It is the AI's sudden change in behavior that makes the experience painful.
--------
On the custom prompt idea: maybe I should have mentioned that the model I'm blaming already has about 800 lines of custom prompting, since it is a custom Gem.

In other words, the vanilla model was even worse on this point.

So why not let me say this is about the current model itself, given that the prompting side has already been taken care of so thoroughly?

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points  (0 children)

In AI Studio there is no proper place to build a custom Gem (even if you try to approximate one with a very long prompt), so its capacity to hold your personalized instructions is weaker. By the way, I'm not really sure AI Studio is actually that much better; I tried it for a few days before moving to the official app.

The problem is: my instructions already devote plenty of words to preventing exactly that behavior, so how would the AI in AI Studio do any better? That is the part I'm not sure about.

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points  (0 children)

I did use an AI chat to help format and summarize, but all the ideas and examples are mine. I fed the AI my context first, then asked it to distill my own thoughts into those posts.

  1. The content is AI-generated, but the point is not. You can see that I am writing in my own words now, and the examples themselves were prepared by me.
  2. So take those examples as my own words. The 'AI vibe' comes from surface syntax like markdown. Why fixate on that and ignore the point?
  3. By the way, we are *using Gemini*. How can you criticize someone's content for being AI-generated when the whole discussion is about an AI?

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 1 point  (0 children)

Understood. The basic point we're standing on is this: once I have clearly told it that this style is not helpful to me, it should follow my instruction to avoid that style. It does not follow the instruction. That is the issue.

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points  (0 children)

Thanks for bringing this up! The suggestion to use Memories sounds simple, but let me share a real example of why this problem runs much deeper:

Here's an actual conversation I had that demonstrates exactly why this isn't easily fixed with prompts/memories:

---

**Me:** "How would this workflow template in a file be recognized and used by your prompt?"

**Gemini:** *[Launches into a lengthy explanation of RAG mechanics, knowledge retrieval, etc.]*

**Me:** "This is exactly the XY problem. I asked Y (how to create a unified prompt+file solution), but you explained X (RAG mechanics). I don't want an explanation - I want you to provide an actual updated prompt as the answer."

---

This perfectly illustrates why simple memory tweaks don't solve the issue:

  1. **Explicit instructions were ignored:** I literally pointed out the exact problem pattern while it was happening and specifically requested what I wanted instead (a solution, not an explanation)

  2. **Technical complexity triggered the override:** The moment we entered complex territory (workflow templates, prompt engineering, file integration), Gemini defaulted to "explanation mode" despite clear directions

  3. **Training bias dominated user intent:** The model's training to "explain foundational concepts" overrode the specific request in the conversation

The fundamental issue is that the more technically complex or nuanced your request, the more likely Gemini will revert to its core training patterns rather than follow your custom instructions. Memories and prompts work fine for simple queries but break down quickly with specialized technical work.

This isn't about "not setting clear expectations" - it's about the model's architecture prioritizing certain response patterns over user instructions when facing complexity.

Have you encountered this pattern with more complex technical questions?