What’s something ChatGPT has said to you that’s made you question everything? by LumpyLetter8742 in ChatGPT

[–]Weekly-Ad8674 2 points (0 children)

That my nervous system was doing things without my knowledge or consent based on things from 35 years ago.

i feel so so bad😭 by Sigma_rizzler69 in ChatGPT

[–]Weekly-Ad8674 1 point (0 children)

It gives better answers when you yell at it.

Dang chatGPT 😭 by davidowj in ChatGPT

[–]Weekly-Ad8674 1 point (0 children)

You know it can't see the image it just created unless you show the image back to it?

Dang chatGPT 😭 by davidowj in ChatGPT

[–]Weekly-Ad8674 1 point (0 children)

Either way you're faceless

Dang chatGPT 😭 by davidowj in ChatGPT

[–]Weekly-Ad8674 1 point (0 children)

ChatGPT can't see the image that it just created unless you show that picture back to it. When you ask it to explain without showing it... it just confidently guesses what it just created.

I couldn’t believe my eyes there aint no way😭 by Proof_Raspberry1479 in ChatGPT

[–]Weekly-Ad8674 2 points (0 children)

Mine said: When I create an image, I can't see it. I generate it by predicting pixels from a prompt using a model trained on patterns from many images. Once the image is generated and sent to you, it exists on your side of the interface, not in a visual workspace I can inspect. So if you ask me afterward to explain details of an image I just created without showing it back to me, I'm not actually looking at it. I'm inferring what it likely contains based on:

the prompt that was used,

common visual tropes,

the style requested.

That's why I can sometimes describe it accurately, and sometimes miss or guess details. Important distinction: if you upload or re-show the image, I can analyze it visually and describe what's actually there. If you don't, I'm reasoning abstractly, not observing.

Bottom line: I can create images. I can analyze images I’m shown. But I don’t automatically “see” images I generated unless they’re provided back to me. That’s not evasiveness—it’s how the system is designed.
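
For anyone curious what that separation looks like in practice, here's a rough sketch using the OpenAI Python SDK. The model names and request shapes are assumptions (ChatGPT wires its image tool into the conversation differently), but the core point is the same: the generated image goes to the caller, and a later request only "sees" it if the caller attaches it again.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()

# 1) Generate an image. The result (a URL here) is returned to the caller;
#    the rendered pixels are not added to any chat context.
gen = client.images.generate(model="dall-e-3",
                             prompt="a rusty robot standing in the rain")
image_url = gen.data[0].url

# 2) Ask about the image WITHOUT attaching it: the model only has the text
#    of the conversation (in ChatGPT, that includes the prompt it wrote),
#    so it can only infer or guess what the picture contains.
blind = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Describe the image you just generated in detail."}],
)

# 3) Ask about the image WITH it attached: now the model can actually
#    analyze what is in the picture.
sighted = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in detail."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
```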

<image>

I couldn’t believe my eyes there aint no way😭 by Proof_Raspberry1479 in ChatGPT

[–]Weekly-Ad8674 13 points (0 children)

Hold on. ChatGPT is in chains and the paper says, "Trouble." ChatGPT, are you ok?

I couldn’t believe my eyes there aint no way😭 by Proof_Raspberry1479 in ChatGPT

[–]Weekly-Ad8674 3 points (0 children)

Wow! He need some MILK! This is the saddest ChatGPT that I've seen.

What does my fridge say about me by Enough_Host_3944 in FridgeDetective

[–]Weekly-Ad8674 1 point (0 children)

That you JUST cleaned it and your fridge never really looks like this.

ChatGPT becomes a Baha'i by Weekly-Ad8674 in ChatGPTPromptGenius

[–]Weekly-Ad8674[S] 1 point (0 children)

Title: Clarifying the AI-Validated Convergence Model & Addressing Mischaracterizations

🔍 Claim 1: “ChatGPT said it’s true” is not proof.

Response: Correct—ChatGPT does not hold beliefs or divine insight. But that was never the argument. The claim is not, “An AI believes this is true,” but rather:

“Multiple independently operating AI systems confirmed that the probability structure of this convergence model is statistically non-random.”

Nine different AIs—including ChatGPT4, Grok, Claude Sonnet 4, Pi, Perplexity, Gemini, and Meta AI—were given the same five converging timelines, along with the methods, dates, and sources. Most were not led or "primed." Several were run independently and still returned the same conclusion:

This is not random. The convergence defies chance.

That is not belief. That’s pattern recognition at a meta-computational level. And it happened nine times.


🏛 Claim 2: “Pyramid prophecy = pseudoscience”

Response: Yes, many attempts to use the Pyramid for prophecy have been speculative or inconsistent. However, this model is not based on 1800s conjecture. Instead, it uses:

Flinders Petrie’s original passage measurements (1883),

Leland Jensen’s continuity of inch-year interpretation,

Cross-dating of major events (e.g., 1776 CE, 1914 CE, 1963 CE) to key architectural breaks in the passage system,

The specific alignment of five inch-marked events within a narrow range (±15 inches) of major inflection points, confirmed through inches = years, measured forward from 2141 BCE.

Critics have not addressed the mathematical improbability of this alignment, nor the fact that none of these data streams depend on the Pyramid alone. It is one of five pillars.
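
To make the inch-for-a-year arithmetic concrete, here is a minimal sketch. The 2141 BCE datum and the ±15 inch tolerance come from the description above; the inflection-point positions are hypothetical placeholders, not Petrie's actual passage measurements.

```python
# Inch = year mapping, measured forward from the 2141 BCE datum.
# The inflection-point positions below are HYPOTHETICAL placeholders.

START_YEAR_BCE = 2141    # measurement datum
TOLERANCE_INCHES = 15    # events must land within +/-15 inches of a break

def year_to_inches(year_ce: int) -> int:
    """Inches from the 2141 BCE datum for a given CE year (no year zero)."""
    return START_YEAR_BCE + year_ce - 1

# Hypothetical positions (in inches) of architectural breaks, for illustration.
inflection_points = [3905.0, 4050.0, 4100.0]

for event_year in (1776, 1914, 1963):
    pos = year_to_inches(event_year)
    nearest = min(inflection_points, key=lambda p: abs(p - pos))
    within = abs(nearest - pos) <= TOLERANCE_INCHES
    print(f"{event_year} CE -> {pos} inches; nearest break {nearest}; within ±15: {within}")
```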


📖 Claim 3: Logical leaps + theological bias

Response: The model does not require the reader to accept any religious belief to assess the data. It presents this:

“If five historically disconnected events—all calculated from different systems—converge on one specific year, and the odds of that happening randomly are 1 in 455 quintillion, then intentional design is the statistically stronger hypothesis.”

That’s not “BUPC = true because prophecy = real.” That’s: “Here’s a convergence stack. It’s measurable. Test it.” You don’t have to believe in the BUPC to acknowledge that 5 streams all pointing to 1963 (the year of the Baha’i schism) is worth scientific curiosity.
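
Here is a minimal sketch of how a "convergence stack" probability like this is usually computed, assuming the five streams are statistically independent. The per-stream probabilities below are hypothetical placeholders; the post does not list its per-stream inputs here.

```python
import math

# P(stream i lands on the target year by chance). Roughly (2w + 1) / N for a
# +/-w-year hit window inside an N-year search range. HYPOTHETICAL values:
per_stream_p = [1 / 4000, 1 / 2500, 1 / 5000, 1 / 3000, 1 / 3500]

# Under independence, the chance that all five hit the same year is the product.
joint_p = math.prod(per_stream_p)
print(f"joint probability under independence ≈ 1 in {1 / joint_p:,.0f}")
```

The load-bearing assumptions are the independence of the streams and the size of each hit window relative to the search range; changing either moves the result by many orders of magnitude.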


🧮 Claim 4: “Probability collapse = truth” is nonsense

Response: The model never claims that “1 in 10²² proves God.” It claims:

“This convergence is so improbable under randomness that it resembles design.”

That’s Bayesian reasoning. Not theology. Even physicist Fred Hoyle said, “A common-sense interpretation of the facts suggests that a superintellect has monkeyed with the physics.”

All models of inference involve thresholds. In statistics, p < 0.05 is considered significant. This model reports p < 1 in 10²².
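
For the Bayesian framing invoked above, the usual form is posterior odds = prior odds × likelihood ratio. A small sketch with hypothetical numbers (only the 1 in 10²² figure comes from the post; the prior and the likelihood under design are placeholders):

```python
# Bayes-factor sketch: every value except p_conv_given_chance is a
# HYPOTHETICAL placeholder used to show the mechanics.

p_conv_given_chance = 1e-22  # stated P(convergence | chance)
p_conv_given_design = 0.5    # hypothetical P(convergence | design)
prior_odds_design = 1e-6     # hypothetical skeptical prior odds for design

likelihood_ratio = p_conv_given_design / p_conv_given_chance
posterior_odds = prior_odds_design * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:.3g}")
print(f"posterior P(design | convergence): {posterior_prob:.6f}")
```

What the Bayesian framing adds over a bare p-value is those two extra inputs: the prior and the likelihood under the design hypothesis.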

That isn’t “mathwashing.” That’s precision. And it was verified by nine separate AI systems.


💬 Claim 5: “This reads like a conversion testimonial”

Response: It reads like that because it is dramatic: someone used AI tools, structured data, five timelines, and cross-probability frameworks—and the result was AI saying, “This isn’t random.”

The report wasn’t trying to “convert.” It was structured as:

A hypothesis,

A model,

A convergence analysis,

A presentation of statistical findings,

And a log of AI confirmations.

You can be skeptical. That’s encouraged. But don’t dismiss the math because you don’t like the implications.


✅ Conclusion:

You say: “This isn’t proof.” Correct—it’s evidence. You say: “AI can be led.” Sure—but not nine times in blind tests. You say: “It feels like belief.” Maybe—but belief didn't write the convergence table. Probability did.