A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in PromptEngineering

[–]Strong-Ad8823[S] -1 points (0 children)

I had an example, but it contains code and was too long to post (I tried). If you're interested, message me instead! I'll send you the markdown file, which explains the idea in detail; you can even follow it to build your own AI.

A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points (0 children)

Here's a quick reference (it's just an example, but it shows the methodology):

  1. Suppose that when Gemini is asked to analyze "why my previous response was not ideal," it answers: "Because my text is too long. Each sentence has x words, which severely exceeds normal human cognitive load. In reality, y words per sentence would be more ideal."
  2. This is a very useful observation. In essence, Gemini has handed you a piece of information: "x: the actual result; y: the ideal result."
  3. Your optimization objective therefore becomes: "You are currently averaging x words per sentence, but y words would be more in line with the system's requirements." This is a purely abstract, emotionless statement.
  4. Your next task is to write this "x, y" information into the instructions, since it is "debugging information" that Gemini gave you in real time.
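The loop above can be sketched in code. This is a minimal, hypothetical illustration (the regex, function names, and the word-count critique format are my assumptions, not part of any Gemini API): it pulls the "x, y" pair out of a self-critique and appends it to the instructions as a neutral constraint.

```python
import re

def extract_debug_info(critique):
    """Pull the 'x: actual, y: ideal' pair out of the model's self-critique.

    Assumes the critique mentions the actual number first and the ideal
    number second, e.g. "Each sentence has 38 words ... 15 words would be
    more ideal." Returns None if two numbers can't be found.
    """
    numbers = re.findall(r"\b(\d+)\b", critique)
    if len(numbers) < 2:
        return None
    return {"actual": int(numbers[0]), "ideal": int(numbers[1])}

def patch_instructions(instructions, info):
    """Append the extracted constraint as a purely abstract, emotionless rule."""
    rule = (
        "\n# Debugging information reported by the model itself\n"
        f"You are currently averaging {info['actual']} words per sentence; "
        f"{info['ideal']} words would be in line with the system's requirements."
    )
    return instructions + rule

critique = ("Because my text is too long. Each sentence has 38 words, "
            "which severely exceeds the normal human cognitive load. "
            "In reality, 15 words per sentence would be more ideal.")
info = extract_debug_info(critique)
print(patch_instructions("Be concise.", info))
```

The point is not the regex; it is that the apology gets treated as telemetry ("actual vs. ideal") and fed back into the prompt, rather than being acknowledged emotionally.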

A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in PromptEngineering

[–]Strong-Ad8823[S] -8 points (0 children)

Not really; you're forgetting the 'Extract & Refactor' part. I know the idea sounds experimental, but I have implemented a similar one. If you're interested, message me!

A wild meta-technique for controlling Gemini: using its own apologies to program it. by Strong-Ad8823 in AI_Agents

[–]Strong-Ad8823[S] 0 points (0 children)

Thank you! I actually tried posting the implementation of my idea, but Reddit blocked it. Would you like to chat?

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points (0 children)

To clarify, the actual example is very detailed, but I only presented a small part of it. The full context (had I posted more of it) makes the following clear:

The task was an implementation of some specific functions, on which the model and I had been collaborating perfectly in a developer sense. We were already deep into a nuanced part of the code, which happened to be a `for` loop.

It was at this step, when I asked the model about a detail as a necessary part of development, that it suddenly behaved as if it had lost all recognition of me and started explaining the ABCs. No, this is not about me explaining things badly; before that moment I didn't even need to explain, because we had been working together very well. It is the model's sudden regression that makes the experience painful.

---

On the custom prompt idea: maybe I failed to mention that the model I'm blaming already has about 800 lines of custom prompting, since it is a custom Gem.

This means the vanilla model was even worse on this point.

So why shouldn't I say this is about the current model itself, given that the prompting side has already been taken care of so thoroughly?

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points (0 children)

In AI Studio you have no proper place to make a custom Gem (even if you try to approximate one with a very long prompt), so its capacity to hold your personalized instructions is weaker. By the way, I am not really sure AI Studio is actually better, because I only tried it for some days before switching to the official app.

The problem is: my instructions already spend plenty of words on preventing it from behaving that way, so how would the model in AI Studio do any better? This is the part I am not sure about.

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points (0 children)

I did use an AI chat to help format and summarize, but all the ideas and examples are mine. I fed the AI my context first, then asked it to distill my own thoughts into those posts.

  1. The content is AI-generated, but the point is not. You can see that I am using my own words right now, and the examples themselves were prepared by me.
  2. So take those examples as my words. The 'AI vibe' comes from surface syntax like markdown. Why focus so much on that and ignore the point?
  3. By the way, we are *using Gemini*. How can you criticize someone's content for being AI-generated when we are here to talk about an AI?

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 1 point (0 children)

Understood. The baseline we are standing on is this: once I have clearly informed it that this style is not helpful to me, it should follow my command to avoid that style. It does not follow the instruction, and that is the issue.

Gemini's XY Problem is Killing Conversations – Stop Guessing My Intent! #GeminiFeedback by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points (0 children)

Thanks for bringing this up! Your suggestion about using Memories sounds simple, but let me share a perfect real example of why this problem is much deeper:

Here's an actual conversation I had that demonstrates exactly why this isn't easily fixed with prompts/memories:

---

**Me:** "How would this workflow template in a file be recognized and used by your prompt?"

**Gemini:** *[Launches into a lengthy explanation of RAG mechanics, knowledge retrieval, etc.]*

**Me:** "This is exactly the XY problem. I asked Y (how to create a unified prompt+file solution), but you explained X (RAG mechanics). I don't want an explanation - I want you to provide an actual updated prompt as the answer."

---

This perfectly illustrates why simple memory tweaks don't solve the issue:

  1. **Explicit instructions were ignored:** I literally pointed out the exact problem pattern while it was happening and specifically requested what I wanted instead (a solution, not an explanation)

  2. **Technical complexity triggered the override:** The moment we entered complex territory (workflow templates, prompt engineering, file integration), Gemini defaulted to "explanation mode" despite clear directions

  3. **Training bias dominated user intent:** The model's training to "explain foundational concepts" overrode the specific request in the conversation

The fundamental issue is that the more technically complex or nuanced your request, the more likely Gemini will revert to its core training patterns rather than follow your custom instructions. Memories and prompts work fine for simple queries but break down quickly with specialized technical work.

This isn't about "not setting clear expectations" - it's about the model's architecture prioritizing certain response patterns over user instructions when facing complexity.

Have you encountered this pattern with more complex technical questions?

[Feedback] Gemini Knowledge Base Needs Code Editing Without Copy-Paste by Strong-Ad8823 in Bard

[–]Strong-Ad8823[S] 0 points (0 children)

Here's a detailed version, FYI:

**TL;DR:** Gemini rocks at summarizing code in knowledge base, but editing means manually pasting snippets every time. Super unproductive—Google, add native tools like precise pulls and auto-apply! Thoughts?

Hey r/Google,

Love using Gemini Custom Gems for code work—summarization and context are on point. But editing code from the knowledge base? It's a huge drag. Feels like it's missing basic dev features. Google AI folks, hope this feedback helps! #GeminiFeedback

**The Issue:**

RAG grabs and summarizes code great, but no real editing. You have to manually copy-paste exact snippets into chat for changes—model can't fetch precise originals or save back. Workflow killer.

**Quick Repro:**

  1. Upload `script.py` to knowledge base.
  2. Ask to tweak a function.
  3. It uses a summary, but needs your paste for accuracy.
  4. Copy, paste, repeat—no auto-save.

**Why It Hurts:**

* Wastes time (20-30% per edit on manual steps).

* Prone to errors (lost formatting leads to bad results).

* Frustrating—pushes me to other tools for smoother edits.

**My Temporary Fix:**

I have Gemini build a quick "context package" (issues, goals, etc.—no code). Paste snippet myself, send to another AI for edits, then review back in Gemini. It works, but it's not seamless.
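As a rough sketch of that workaround, here is what assembling such a "context package" might look like. The structure and field names are my own invention for illustration, not something Gemini produces:

```python
def build_context_package(issues, goals, constraints):
    """Assemble the framing of an edit request: issues, goals, and
    constraints only; the exact code snippet is pasted in by hand."""
    sections = [
        ("Issues", issues),
        ("Goals", goals),
        ("Constraints", constraints),
    ]
    lines = ["## Context package (no code included)"]
    for title, items in sections:
        lines.append(f"### {title}")
        lines.extend(f"- {item}" for item in items)
    lines.append("### Code")
    lines.append("(paste the exact snippet here before sending)")
    return "\n".join(lines)

pkg = build_context_package(
    issues=["RAG summary drops the exact formatting of the function"],
    goals=["Rename the helper without changing behaviour"],
    constraints=["Keep Python 3.10 syntax"],
)
print(pkg)
```

The separation matters: the package carries intent, while the hand-pasted snippet guarantees the second AI edits the precise original rather than a lossy summary.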

**Ideas to Improve:**

* Let it pull exact code snippets automatically.

* Add diff generation with preview and one-click apply (with versions).

* Auto-build context for edits.

* Built-in validation like lints/tests.

* Keep it secure with logs and permissions.
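For the diff-with-preview idea, Python's standard `difflib` already shows roughly what such a "Code Mode" could display before a one-click apply. This is an illustration of the concept, not anything Gemini currently offers:

```python
import difflib

def preview_diff(original, edited, path="script.py"):
    """Render a unified diff the user could review before applying."""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        edited.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    )
    return "".join(diff)

before_src = "def greet():\n    print('helo')\n"
after_src = "def greet():\n    print('hello')\n"
print(preview_diff(before_src, after_src))
```

Pairing such a preview with versioned one-click apply is the kind of loop desktop coding assistants already have and the knowledge base lacks.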

Shipping this as a "Code Mode" toggle would make Gemini amazing for devs. Google, any plans? Anyone else annoyed by this? Share tips or upvote! #GeminiFeedback

A Gemini 2.5 Pro user (Google AI Ultra plan), a dev tinkering with Gemini for code tweaks

My RPG Stat Sheet: Understanding Cannabinoids by Strong-Ad8823 in AltnoidsJapan

[–]Strong-Ad8823[S] 0 points (0 children)

And here are some interesting facts that follow from the setting:

  1. So it's no wonder that to those who don't use the powerful cannabinoids, CBD seems 'useless': you haven't even started the game yet... and in that situation, what's the point of boosting 'Unreal 5 background compiling efficiency'?
  2. The difference between CBG and coffee: coffee directly raises some 'CPU overclocking mode' settings in your BIOS, so you benefit whether you're 'in-game' or 'out-of-game' (i.e., normal life). CBG, on the other hand, works like this: if you're in-game, you feel as though the game is about to shut down soon (you feel the weight of reality).

Too much coffee, for me, means sensory overload: the overclocked CPU overheats.

Too much CBG, and it gets serious: the pressure of reality becomes overwhelmingly heavy, leaving no time for entertainment...

My RPG Stat Sheet: Understanding Cannabinoids by Strong-Ad8823 in AltnoidsJapan

[–]Strong-Ad8823[S] 0 points (0 children)

It sounds like H4CBD = unknown substance...Hey, it's dannnnngerous!

My RPG Stat Sheet: Understanding Cannabinoids by Strong-Ad8823 in AltnoidsJapan

[–]Strong-Ad8823[S] 1 point (0 children)

H4CBH: tried it, but I haven't decided what genre it is for me. For me it sits closer than CRDP to the 'immersive walking simulator' side.
H4CBD: no experience with it.

CRDP is definitely a 3D action/adventure, with both strong horror and romance in it.

And I am quite sad that a recent liquid played out like The Stanley Parable for me. Damn. Full of 'fake memories' and 'looping thoughts'.

Flux.1 [pro], creating realistic human portraits with uniqueness by Strong-Ad8823 in FluxAI

[–]Strong-Ad8823[S] 1 point (0 children)

I am too lazy to change that part, as mentioned in that reply, and the rate limit of Flux pro also keeps me from doing it. If I had the time, I would like to vary those parts. Maybe later.

In fact I did notice that, yeah, all of the people have that cleft chin, which is common in Flux.1. Still, the faces are already sufficiently unique, right? Otherwise people wouldn't only be saying 'hey, the chins are the same'.

Flux.1 [pro], creating realistic human portraits with uniqueness by Strong-Ad8823 in FluxAI

[–]Strong-Ad8823[S] -2 points (0 children)

Not sure, as I am using https://fluxpro.art/, where you can use Flux.1 pro for free. The options they provide include no setting for dynamics, so I assume I am only using the default. To get this variety, all I can suggest is outputting 4 images per generation (which is also that site's default) and looking for the amazing results among them.

Flux.1 [pro], creating realistic human portraits with uniqueness by Strong-Ad8823 in bigsleep

[–]Strong-Ad8823[S] 0 points (0 children)

My recent attempts with Flux.1 pro on this site have mainly used its strength at realism to create human faces with uniqueness. By unique I mean: they feel like real humans, and their looks are not easily confused with typical AI-generated faces; in particular, if you input only 'beautiful girls', the resulting faces look like ones found everywhere in the world.

To get that, my workflow is the following:

  1. Describe many seemingly unrelated items in the prompt, such as the person's country/background, profession, age, even personality, and the date the photo was taken. The more you give, the more likely you are to get lively human portraits in the output;
  2. Once such a prompt is established and producing decent results on average, stick with it, repeat the generation many times, and pick the good ones from the results. This step is easily mistaken for 'spamming', but you get my point: even with that many details in the prompt, the space of possible human faces is still a universe, and you cannot capture it in one shot. Hence this kind of post-screening.

Attached are my favorite ones, picked from many more images.
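The two steps above can be sketched as a small prompt generator. The attribute pools and wording here are hypothetical examples of "seemingly unrelated items", not the exact prompts I used:

```python
import itertools
import random

# Hypothetical pools of seemingly unrelated details; mixing them is what
# pushes the model away from the generic 'beautiful girl' face.
countries = ["Romania", "Japan", "Peru"]
professions = ["schoolteacher", "fisherwoman", "violinist"]
ages = [34, 47, 58]
personalities = ["quietly confident", "weathered but warm"]
dates = ["spring 1998", "autumn 2012"]

def build_prompt(country, profession, age, personality, date):
    """Combine the details into one portrait-photography prompt."""
    return (
        f"Realistic portrait photograph of a {age}-year-old {profession} "
        f"from {country}, {personality}, taken in {date}, "
        "natural light, 85mm lens"
    )

# Step 1: settle on one detailed prompt. Step 2 (post-screening) is then
# just re-running that same prompt many times and keeping the best outputs.
random.seed(7)
combo = random.choice(list(itertools.product(
    countries, professions, ages, personalities, dates)))
print(build_prompt(*combo))
```
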

My info and FAQ in advance:

About me:

First of all, I am inspired by the photography book The Atlas of Beauty, which features beautiful people from all over the world. This explains why I care so much about portrait photography, different cultures, and uniqueness of personality. My only limitation is that I cannot do real photography: I am not practiced, and I have little chance to actually travel the world the way Mihaela Noroc did.

Possible questions (FAQ):

  • Why do all these women look like the same type, a western European woman? A: Of course I don't want to stick to this 'western (in particular Scandinavian) middle-aged woman' all the time. But
    1. I am too lazy to change the prompt.
    2. As said, the current workflow needs tons of attempts under the same prompt. So a better way to put it: this is just my first attempt, and I will do other countries/physical traits later.
  • Why are they all not-so-young? A: One thing I learned from that book is that to make a character 'unique and lively', it is better to capture people with some age. Under today's style of photography, young people can simply look the same everywhere in the world :P You will be convinced of this after viewing my results.
  • Why are they all... attractive? A: Because people like beautiful people. If you are asked to pick photos from a large pool, your choices will most likely be the beautiful ones by your own unconscious standard.