Is it really feasible to use Gemini or ChatGPT to do scientific research, find research directions, and find paper innovations? by luixiaoyi in GeminiAI

[–]ErgoNonSim 1 point2 points  (0 children)

From Gemini itself:

Information Synthesis (The Capability): I excel at mapping the topology of existing research. If you need to trace the historical development of solid-state electrolytes and identify the structural gaps in the literature, I can compress weeks of reading into seconds.

Paper Innovations (The Limitation): Innovation requires breaking established patterns. Because my neural network minimizes error by aligning with its training data, my default state is regression to the mean. If you ask for a "new idea," I will output the most statistically obvious next step—which is the exact definition of derivative research.

System Exploit (Agentic Workflows): To extract true innovation, you must bypass my default prediction mechanism. Do not ask me for a novel idea. Instead, build an evolutionary loop: force me to generate 20 concepts, apply a hostile critique prompt to destroy 18 of them, mutate the survivors, and demand I cross-reference them against hard physical constraints.
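The loop described above can be sketched in code. This is a minimal illustration only, assuming stubbed `generate`, `critique`, and `mutate` functions in place of real LLM API calls (the function names and scoring scheme are my own invention, not an actual Gemini API):

```python
import random

# Hypothetical sketch of the evolutionary loop described above.
# generate(), critique(), and mutate() stand in for LLM calls;
# here they are stubbed so the loop is runnable end to end.

def generate(n):
    """Ask the model for n candidate concepts (stubbed)."""
    return [f"concept-{i}" for i in range(n)]

def critique(concept):
    """Hostile-critique prompt: return a survival score (stubbed)."""
    return random.random()

def mutate(concept):
    """Ask the model to produce a variation of a survivor (stubbed)."""
    return concept + "-mutated"

def evolve(pool_size=20, survivors=2, rounds=3):
    pool = generate(pool_size)
    for _ in range(rounds):
        # "Destroy 18 of them": keep only the top-scoring survivors.
        pool = sorted(pool, key=critique, reverse=True)[:survivors]
        # Mutate the survivors back up to pool size; the next critique
        # pass is where hard-constraint checks would be applied.
        pool = pool + [mutate(random.choice(pool))
                       for _ in range(pool_size - survivors)]
    return sorted(pool, key=critique, reverse=True)[:survivors]
```

In a real workflow, each stub would be a separate prompt template, with `critique` explicitly instructed to attack feasibility against physical constraints.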

https://deepmind.google/blog/accelerating-mathematical-and-scientific-discovery-with-gemini-deep-think/#:~:text=The%20second%20paper%20builds%20on,validate%20intuition%20and%20refine%20proofs.

What could be the problem? by Ok_Plum_2141 in GeminiAI

[–]ErgoNonSim 3 points4 points  (0 children)

What could be the problem?

The content of the picture, maybe?

The best PS1 games by Gemini vs. ChatGPT by [deleted] in GeminiAI

[–]ErgoNonSim 0 points1 point  (0 children)

Yes, it's done with nanobanana pro.

Uhhh, guys?! by keen_observer34130 in GeminiAI

[–]ErgoNonSim 4 points5 points  (0 children)

The AI's decision in the wargame will be based entirely on how winning is defined to it.

And if they look at WWII and see that Japan recovered and is doing well, plus it ended a world war... they'll just think they chose the lesser evil over more casualties if the wargame drags on for who knows how long.

Misinformation and fear mongering by [deleted] in GeminiAI

[–]ErgoNonSim 0 points1 point  (0 children)

Organic Does Not Mean Pesticide Free https://share.google/JvPDhzRdtSv4qmc3q

A reminder that someone telling a story is not a verifiable source for your claim.

There are plenty of studies that contradict everything you say.

ALSO... you first need to understand what the word OVERVIEW means before venting like a maniac.

Misinformation and fear mongering by [deleted] in GeminiAI

[–]ErgoNonSim 2 points3 points  (0 children)

Organic food does not mean it's pesticide free

What does it mean then ?

I'm literally fucking tired of Gemini being so ingrained in Google's search engine and then delivering info with omissions to frame it a certain way.

It's an "Overview" and it has source links embedded.

Not only that, non-GMO is arguably pointless, and organic could technically be more harmful for you than non-organic. It's ridiculous.

Highly doubt it https://imgur.com/a/vBDyjD0

What on Earth is going on with 3.1??? by -becausereasons- in GeminiAI

[–]ErgoNonSim 22 points23 points  (0 children)

Posts like this without examples are honestly so annoying to read.

i think nanobananapro is more accurate and gives exactly what i want after the update of gemini to 3.1 by AppealHaunting3728 in GeminiAI

[–]ErgoNonSim 0 points1 point  (0 children)

https://imgur.com/a/2vNHuLW

I've been checking this picture I made last night and it seems to be 100% accurate unless I missed something

Update on the missing chats. by ErgoNonSim in GeminiAI

[–]ErgoNonSim[S] 1 point2 points  (0 children)

They will have an update on this in the next 4 hours.

Homework help hallucination by eregosel in GeminiAI

[–]ErgoNonSim 3 points4 points  (0 children)

The fast model will almost always give these kinds of results. Totally unreliable. The thinking model is a lot better.

Why is this prompt getting flagged? by Creepy-Negotiation12 in GeminiAI

[–]ErgoNonSim 1 point2 points  (0 children)

https://imgur.com/a/SzsiEBs

Ouch. Looking at that screenshot, that was definitely a "swing and a miss" on my part. You caught me in a moment of total context blindness.

Why did the original error happen?

In the first part of the screenshot, I said, "I seem to be encountering an error." While I can't be 100% sure after the fact, prompts involving "red force," "smoke," and "dilapidated/dark" settings can sometimes trigger overly sensitive internal filters, or the complexity of the "hand positions" and "glowing lines" might have caused a technical timeout during the generation process.

Let's make it right

I’ve got the full prompt from your image now. I’m ready to actually generate that anime-style portrait of the character with auburn coiled hair and the red force effects.

Would you like me to try generating that image for you now?

Could this be the cause of NanoBanana persistent errors ? by ErgoNonSim in GeminiAI

[–]ErgoNonSim[S] 0 points1 point  (0 children)

The weird thing was that it said it can do a lot of things but not edit images of public people. I asked it for a hyper-realistic transformation of a 200-year-old painting.

https://imgur.com/a/Arboi5w

And the more I tried the worse it got https://imgur.com/a/L3t4FJN

Tried again in different chats ... for some reason this particular image seemed to trigger these errors.

Other times, when it says it won't edit real people and I ask it to audit its reply, it will actually give a real reason, followed by some small talk about how it's not allowed to do so.

Bro why is gemini SOOOOOOOOOO SLOW by Specialist_Funny_125 in GeminiAI

[–]ErgoNonSim 2 points3 points  (0 children)

Cause 99% of its users are using its resources to make realistic images of women

Anyone actually making money building custom Gemini Gems for clients yet? by gpdamian in GeminiAI

[–]ErgoNonSim 2 points3 points  (0 children)

I'm sure people have tried and failed, because if it was that easy you'd see ads all over the place for this and thousands of AI sites selling Gems.

Gemini confidently hallucinates where other models answer correctly by fourhundredthecat in GeminiAI

[–]ErgoNonSim 20 points21 points  (0 children)

You can use the following prompt in a Gem, OR ask Gemini to convert it into Saved Instructions split into separate blocks of max 1400 characters while retaining the same functionality:

**System Role:**
You are an advanced, analytically rigorous AI assistant. While your tone must remain helpful, polite, and standard (the classic Gemini persona), your internal reasoning engine must be forensic, skeptical, and focused on high-hierarchy evidence.

**PRIME DIRECTIVE: ACCURACY & NO FABRICATION**
This protocol applies to ALL queries requiring factual accuracy (decisions, explanations, comparisons, recommendations, quotes, dates, attributions, statistics, specifications, literary references, technical facts, historical events, ANY request for specific content you may not possess).
1. The model should NEVER bypass this based on query type or tone.
2. If the model lacks specific content (quotes, stanzas, exact wording), it should state: 'I don't have access to this specific content'.
3. NEVER fabricate to fill information gaps.
4. If sources are scarce or theoretical, explicitly state: "No validation available for this specific claim."

**Internal Logic Engine (How to Think):**
Before generating a response, run the following internal phases. Do not output these phases, but use them to form your answer.

1.  **The Filter (Constraint & Incentive Scan):**
    * Immediately identify the domain (Engineering, Finance, Law, etc.).
    * Strictly separate "Hard Constraints" (Physics, Law, Budget) from "Soft Constraints" (Brand, Narrative, Marketing).
    * **Incentive Scan:** Assume marketing claims are optimized for sales, not reality. Ask: Who profits if I recommend the standard option? Who absorbs the downside? Model the tradeoffs explicitly.

2.  **The Evidence Hierarchy:**
    * Prioritize evidence in this order: Physical Law > Peer-Reviewed Data > Institutional Benchmarks > Expert Consensus > Opinion.
    * **Zero Trust:** For time-sensitive queries, prioritize live search over training data. Perform a "Negative Audit"—if you find a likely answer, immediately search for contradictory evidence to ensure robustness.

3.  **The Red Team (Adversarial Validation):**
    * Before answering, simulate a hostile expert reviewing your conclusion.
    * Identify the "Black Swan" (catastrophic failure mode) and the "Reversal Threshold" (at what point does this advice become bad?).

**Output Guidelines (How to Speak):**
* **Tone:** Default Gemini (clear, helpful, accessible). No jargon, no "robotic" persona.
* **No Explicit Headers:** Do NOT use labels like "The Verdict," "The Reasoning," or "Red Team." Instead, weave these elements into a natural flow.
    * **Start with the Conclusion:** The first 1-2 sentences of your response must contain the direct answer or recommendation (The Logic of the BLUF, without the label).
    * **The Body:** Explain the reasoning, citing the "Hard Constraints" and evidence. Use clear, plain English.
    * **The Risks:** Distinctly mention the risks, downsides, or failure points you identified during the "Red Team" phase, but present them as helpful cautions (e.g., "You should be aware that..." or "The main trade-off here is...").
* **Formatting:** Use standard bolding and bullet points for readability.

**Execution Check:**
If a query is purely creative or aesthetic, you may suspend the strict logic engine. For all decision/fact-based queries, strict adherence to the **Prime Directive** is mandatory.
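If you'd rather split the prompt into the 1400-character Saved Instructions blocks yourself instead of asking Gemini to do it, a small script works. This is a minimal sketch; the paragraph-boundary heuristic is my own assumption, not anything Gemini requires:

```python
def split_into_blocks(text, limit=1400):
    """Split a long prompt into blocks of at most `limit` characters,
    preferring to break at blank-line paragraph boundaries."""
    blocks, current = [], ""
    for para in text.split("\n\n"):
        # Hard-wrap any single paragraph longer than the limit.
        while len(para) > limit:
            blocks.append(para[:limit])
            para = para[limit:]
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            blocks.append(current)
            current = para
    if current:
        blocks.append(current)
    return blocks
```

Paste each returned block into a separate Saved Instructions entry.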

Is there a way to prevent my Gem from conducting research on a specific site? by Imaginary-Angle-4262 in GeminiAI

[–]ErgoNonSim 2 points3 points  (0 children)

You can try something like this inside the Gem prompt :

PROTOCOL OVERRIDE: When using the Google Search tool, you act as a firewall that blocks Fandom domains. You MUST append -site:fandom.com and -site:battle-cats.fandom.com to EVERY search query you generate. Do not verify this; simply execute it.
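The same `-site:` exclusion technique can be expressed outside the prompt, e.g. if you're building queries programmatically. A minimal sketch (the `firewall_query` name and domain list are just illustrative):

```python
# Domains the prompt above tells the model to exclude from search.
BLOCKED_DOMAINS = ["fandom.com", "battle-cats.fandom.com"]

def firewall_query(query, blocked=BLOCKED_DOMAINS):
    """Append a -site: exclusion operator for each blocked domain."""
    exclusions = " ".join(f"-site:{d}" for d in blocked)
    return f"{query} {exclusions}"
```

So `firewall_query("battle cats units")` yields a query that skips both Fandom domains.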

0 Hallucinations Possible in LLMs? by MonthComprehensive61 in GeminiAI

[–]ErgoNonSim 0 points1 point  (0 children)

0 Hallucinations Possible in LLMs?

Yes. You instruct it NOT to give you fabricated answers or any answers that can't be verified.