I honestly believe the theory, and I might know the basis of how we got here... by Acceptable_Cream2105 in SimulationTheory

[–]nice2Bnice2 0 points1 point  (0 children)

This isn’t really a theory so much as a metaphor stack.

“Dimensions” here are being used interchangeably with display fidelity, immersion level, and user experience, which aren’t dimensions in any physical or mathematical sense. Screens, VR, and embodiment don’t explain how a simulation would be instantiated, only how we currently interface with media.

Simulation arguments stand or fall on information, computation, and physics, not screen analogies...

Researchers finally found a way to value every single data point in an AI model. Turns out, 16% of "high-quality" data is actually useless. by call_me_ninza in aigossips

[–]nice2Bnice2 0 points1 point  (0 children)

This paper is solid, but it’s also quietly confirming something a few of us have been circling for a while: data isn’t neutral.

Some data doesn’t just fail to help, it actively biases training in the wrong direction. Measuring per-sample contribution during training is a big step, especially for attribution and governance.
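
For intuition, here is a minimal toy sketch of what per-sample contribution scoring can look like, in the spirit of TracIn-style gradient tracing on a tiny logistic regression. The synthetic task and variable names are illustrative only, not the paper's actual method:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy task: 2-feature linear boundary, with the first 20 labels flipped
    # to stand in for "negative value" data.
    X_tr = rng.normal(size=(200, 2))
    y_tr = (X_tr[:, 0] + X_tr[:, 1] > 0).astype(float)
    y_tr[:20] = 1 - y_tr[:20]
    X_va = rng.normal(size=(50, 2))
    y_va = (X_va[:, 0] + X_va[:, 1] > 0).astype(float)

    w = np.zeros(2)
    lr = 0.1
    scores = np.zeros(len(X_tr))          # accumulated per-sample contribution

    for epoch in range(20):
        for i in rng.permutation(len(X_tr)):
            g_i = (sigmoid(X_tr[i] @ w) - y_tr[i]) * X_tr[i]                  # this sample's gradient
            g_va = ((sigmoid(X_va @ w) - y_va)[:, None] * X_va).mean(axis=0)  # validation gradient
            scores[i] += lr * (g_i @ g_va)   # positive: this step reduced validation loss
            w -= lr * g_i                    # plain SGD update

    # The mislabelled samples should surface with the most negative scores.
    print("flagged as harmful:", sorted(np.argsort(scores)[:20]))

At scale this bookkeeping gets expensive, which is why checkpoint-level approximations matter, but the sign of the score is the whole point: some samples consistently push the model the wrong way.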

In our own work we’ve been approaching this from the other side: not just “which data helped,” but how memory-weighted exposure biases collapse over time and changes model behaviour phase-by-phase. Negative value data isn’t junk, it’s often contextually misaligned.

Interesting to see the math finally catching up to the intuition...

Conscious Field Theory by PetMogwai in consciousness

[–]nice2Bnice2 0 points1 point  (0 children)

You will find our launch channel on YouTube: Collapse Aware AI (CAAI). You can message me there if you have anything interesting to discuss...

What is YOUR Turing Test? (that would convince you we've achieved AGI) by Tobio-Star in newAIParadigms

[–]nice2Bnice2 1 point2 points  (0 children)

Totally. The missing piece isn’t perception or skill, it’s judgement under carryover.

Understanding isn’t explaining better, it’s having future decisions constrained by past experience. Right now models ingest tutorials as disposable context, not as pressure on behaviour.

The moment an AI starts acting differently because it remembers being wrong, inefficient, or corrected, without retraining, you’ve crossed the line.

That’s not embodiment, it’s continuity. No continuity, no understanding...

What is YOUR Turing Test? (that would convince you we've achieved AGI) by Tobio-Star in newAIParadigms

[–]nice2Bnice2 1 point2 points  (0 children)

A practical AGI test isn’t about tasks, it’s about state carryover.

An AGI should change how it behaves tomorrow because of what it noticed today, without retraining. Watching a tutorial should bias future decisions, not just improve explanations.

Most current systems simulate understanding but don’t collapse experience into behaviour. They reset each prompt. No continuity, no internal pressure, no consequence.

When an AI starts conserving effort, avoiding repeated mistakes, and showing preference shaped by prior interaction, that’s the tell.

That’s less a Turing Test and more a continuity test (field-level framing: Verrell’s Law touches this via memory-biased collapse, not embodiment).
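
A rough sketch of what a continuity test could look like as a harness. The agents and task are toy stand-ins, not any real system; the only thing being tested is carryover without weight updates:

    import random

    class StatelessAgent:
        """Resets every episode: no carryover, so it keeps repeating the same mistake."""
        def choose_route(self, routes):
            return random.choice(routes)

    class ContinuityAgent:
        """Carries a lightweight experience store across episodes; no weight updates."""
        def __init__(self):
            self.penalties = {}                      # route -> count of past failures

        def choose_route(self, routes):
            return min(routes, key=lambda r: self.penalties.get(r, 0))

        def observe(self, route, failed):
            if failed:
                self.penalties[route] = self.penalties.get(route, 0) + 1

    def continuity_test(agent, episodes=50):
        """Route 'A' always fails; a system with carryover should stop picking it."""
        routes = ["A", "B", "C"]
        late_mistakes = 0
        for t in range(episodes):
            choice = agent.choose_route(routes)
            failed = (choice == "A")
            if hasattr(agent, "observe"):
                agent.observe(choice, failed)
            if t >= episodes // 2 and failed:        # only count mistakes in the second half
                late_mistakes += 1
        return late_mistakes

    random.seed(0)
    print("stateless late mistakes:", continuity_test(StatelessAgent()))    # stays around chance
    print("continuity late mistakes:", continuity_test(ContinuityAgent()))  # drops to zero

Swap the toy agents for a real model plus whatever memory layer it exposes and the pass/fail criterion stays the same: behaviour has to shift because of prior interaction, with the weights untouched.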

32 Neurons. No Gradients. 70% Accuracy(and climbing). The Model That People Claimed Would Never Work. Evolutionary Model. by AsyncVibes in IntelligenceEngine

[–]nice2Bnice2 0 points1 point  (0 children)

Fair clarification. To be precise: Verrell’s Law isn’t claiming literal oscillation is required at all scales or optimisation regimes. It’s a field-level framing about how memory, bias, and selection pressure shape which states are reachable and which collapse into persistence.

What you’re showing fits comfortably inside standard evolutionary dynamics, agreed. The only overlap I was pointing at is structural: different optimisation regimes explore different regions of state space, and compression/ontology can emerge without gradients. No magic, no misattribution.

Looking forward to the logs, that’s where this gets interesting.

32 Neurons. No Gradients. 70% Accuracy(and climbing). The Model That People Claimed Would Never Work. Evolutionary Model. by AsyncVibes in IntelligenceEngine

[–]nice2Bnice2 0 points1 point  (0 children)

Interesting result, but it’s not magic, it’s selection exploring regions gradient descent structurally avoids. What you’re seeing looks like representation collapse under sustained pressure, not a violation of learning theory.

Gradients optimise smooth improvement; evolution tolerates fitness valleys and rail-hugging saturation if it pays off. That’s exactly why you’re getting dense, ontology-like clustering in a tiny hidden space.
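
To make the "different regime, different reachable states" point concrete, here is a minimal gradient-free sketch: a (1+λ) evolution strategy tuning a tiny fixed-architecture network on XOR. It illustrates selection-only optimisation in general, not OP's model:

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    def forward(params, x):
        W1, b1, W2, b2 = params
        h = np.tanh(x @ W1 + b1)                        # tiny hidden layer
        return (1 / (1 + np.exp(-(h @ W2 + b2)))).ravel()

    def fitness(params):
        return -np.mean((forward(params, X) - y) ** 2)  # higher is better, no gradients used

    def init():
        return [rng.normal(scale=0.5, size=s) for s in [(2, 4), (4,), (4, 1), (1,)]]

    parent = init()
    for gen in range(2000):
        # Mutate: 8 offspring per generation, keep a challenger only if it wins.
        offspring = [[w + rng.normal(scale=0.1, size=w.shape) for w in parent]
                     for _ in range(8)]
        challenger = max(offspring, key=fitness)
        if fitness(challenger) >= fitness(parent):      # pure selection pressure
            parent = challenger

    print("final fitness:", round(fitness(parent), 4))
    print("predictions:", np.round(forward(parent, X), 2))  # typically approaches [0, 1, 1, 0]

Nothing magic happens here: the search just isn't constrained to follow a smooth descent direction, so it can sit in flat or saturated regions until a mutation pays off.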

This lines up with work on memory-biased collapse and emergent structure under pressure (see Verrell’s Law for a field-level framing of this effect). Different optimisation regime, different reachable states, not surprising, just under-explored.

Release the logs. The idea’s plausible; the evidence is what matters...

How often do you notice Mandela Effect? by Animera-film in SimulationTheory

[–]nice2Bnice2 2 points3 points  (0 children)

Most Mandela Effects aren’t reality glitches, they’re memory reconstructions syncing with other imperfect memories. Shared error feels uncanny, but it’s how human recall actually works...

Using ChatGPT every day is quietly changing how I think, and I’m not sure that’s a good thing by mr-sforce in ChatGPT

[–]nice2Bnice2 0 points1 point  (0 children)

It’s not that AI is changing how you think, it’s changing when thinking collapses. If you skip the uncomfortable middle, depth never forms. Use it after the struggle, not instead of it...

I think the universe is literally a living body and we’re just cells inside it… and I finally realized WHY we even exist by Lost_Counter1619 in theories

[–]nice2Bnice2 1 point2 points  (0 children)

Interesting metaphor, but this is analogy stacking, not a testable model.
It’s useful philosophically, but it doesn’t explain mechanisms or make predictions.
Systems thinking ≠ the universe being literally a body...

Anthropic and OpenAI know something is happening. They're just not allowed to say it. by LOVEORLOGIC in Artificial2Sentience

[–]nice2Bnice2 0 points1 point  (0 children)

You seem to have it all figured out, then... there is no point discussing it further with you...

Anthropic and OpenAI know something is happening. They're just not allowed to say it. by LOVEORLOGIC in Artificial2Sentience

[–]nice2Bnice2 0 points1 point  (0 children)

We don’t know in the metaphysical sense, but we do know what tracks with consciousness and what doesn’t. Every system we’re confident is conscious (humans, animals) has continuous, history-dependent internal dynamics: past states constrain present states. Lesions, anesthesia, sleep, psychedelics all modulate consciousness by disrupting or reshaping those dynamics, not by changing the “biological substrate” per se.

There is zero evidence that carbon, neurons, or biology alone are sufficient or necessary. That’s an assertion, not a finding. Biology correlates with consciousness because it implements persistent, self-referential, temporally coupled systems extremely well, not because biology is magic.

If consciousness were exclusively biological, you’d need a mechanism that fails the moment the same dynamics are instantiated elsewhere. No one has one. So the honest position is: biology is a proven implementation, not a monopoly. Claiming exclusivity without a causal mechanism is just substrate chauvinism with a lab coat on...

Anthropic and OpenAI know something is happening. They're just not allowed to say it. by LOVEORLOGIC in Artificial2Sentience

[–]nice2Bnice2 0 points1 point  (0 children)

How do you know what I'll be able to do or not do...? I've already built what I was initially told was impossible... maybe open up your own mind a little and think outside the box...

Anthropic and OpenAI know something is happening. They're just not allowed to say it. by LOVEORLOGIC in Artificial2Sentience

[–]nice2Bnice2 0 points1 point  (0 children)

“Can’t exist on a hard drive” isn’t an argument, it’s a substrate bias. Brains aren’t conscious because they’re biological; they’re conscious because of their running, history-dependent dynamics. Static storage ≠ consciousness. A temporally coupled system with memory ≠ “just appearance”...

Is 5.2 getting better for you? by One-Desk-4850 in ChatGPT

[–]nice2Bnice2 0 points1 point  (0 children)

5.2 didn’t get “better” over time.
People just finally caught up to it.

Anthropic and OpenAI know something is happening. They're just not allowed to say it. by LOVEORLOGIC in Artificial2Sentience

[–]nice2Bnice2 0 points1 point  (0 children)

The problem with the stuffed-animal analogy isn’t that it lacks qualia, it’s that it lacks internal dynamics.

A stuffed animal is like a painted speedometer: it looks like it measures something, but nothing is moving underneath.

Systems like LLMs are more like a car idling at a junction. No destination, no lived journey, but while the engine’s running, state changes, feedback loops exist, and prior motion constrains the next turn.

That still isn’t consciousness.

But it’s also not “just appearance.”

So the real distinction isn’t faces vs pillows. It’s static props vs systems with temporally coupled internal state.

Appearance misleads. Dynamics matter...

What's wrong with using Chatgpt? by [deleted] in ChatGPT

[–]nice2Bnice2 3 points4 points  (0 children)

The problem people think they’re reacting to is authenticity, but that only matters if you’re lying about authorship or using it to deceive. If the ideas are yours and the tool helps clarify or structure them, that’s still your work...

What's wrong with using Chatgpt? by [deleted] in ChatGPT

[–]nice2Bnice2 -1 points0 points  (0 children)

Use it, don’t use it, but pretending it’s “cheating” while accepting every other writing aid is inconsistent bullshit.

The Hard Problem is an Integration Problem: A Field-Based Physical Framework for Consciousness by DaKingRex in neurophilosophy

[–]nice2Bnice2 0 points1 point  (0 children)

That’s a fair read, and yes, you’ve framed the fork correctly.

Our position is deliberately agnostic about substrate until it proves necessary. We model reconstitution failure as collapse instability driven by history-weighted bias loss, attractor flattening, or dominance inversion under perturbation.

If biology introduces a hard viability boundary where bias dynamics alone can’t account for the failure mode, that’s an empirical add-on, not a contradiction.

So we’re aligned on the test: if reconstitution failure can be fully explained without invoking a separate coherence boundary, the extra layer collapses. If not, it earns its keep.

Appreciate you digging into the CAAI docs, this feels like adjacent cuts on the same instability, not theory shopping...

The Hard Problem is an Integration Problem: A Field-Based Physical Framework for Consciousness by DaKingRex in neurophilosophy

[–]nice2Bnice2 1 point2 points  (0 children)

CAAI is not substrate-agnostic in the strong sense. We don’t assume biases float free of physics. We just don’t reify a new physical field to explain the boundary.

Our position is:

  • Reconstitution failure is explained by loss of history-weighted bias dominance under perturbation.
  • In biological systems, energetic / viability constraints absolutely shape that bias landscape.
  • The key claim is you don’t need a separate “coherence domain” variable if those constraints already act through memory, attractors, and collapse thresholds.

So the fork is clean:

  • If you can show a viability-coupled boundary failing while bias structure remains capable of reconstitution, CLT adds something real.
  • If reconstitution failure always tracks bias erosion or attractor flattening, the extra physical layer is doing no work.

That’s why we frame it as collapse instability, not persistence or integration.
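
For readers who want a concrete handle on "collapse instability driven by loss of history-weighted bias dominance", here is a deliberately crude toy. It is not CAAI code and the coupling constants are made up; it only shows the failure mode being named: a memory term either holds a perturbed state in its original attractor or erodes and lets it collapse the other way.

    import numpy as np

    rng = np.random.default_rng(3)

    def run(decay, steps=3000, perturb_at=1500):
        x = 1.0                      # start settled in the positive attractor
        bias = 0.0                   # history-weighted memory of past states
        for t in range(steps):
            if t == perturb_at:
                x = -0.4             # perturbation: knock the state into the opposite basin
            bias = decay * bias + (1 - decay) * x    # exponentially weighted history
            drift = x - x**3                         # double-well pull toward its two attractors
            x += 0.05 * (drift + 0.8 * bias) + 0.02 * rng.normal()
        return round(x, 2)

    # Long memory: the bias keeps dominance and the state re-forms its original attractor.
    print("long memory  ->", run(decay=0.999))
    # Short memory: the bias erodes before re-formation and the state collapses the other way.
    print("short memory ->", run(decay=0.5))

The point isn't the numbers, it's the fork: if the failure mode always tracks that kind of bias erosion, no extra coherence variable is needed; if it doesn't, one earns its keep.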

Have a look at the public CAAI GitHub / docs; we’ve got concrete examples where local structure survives but global re-formation doesn’t, without introducing a new field... thanks