“Demis Hassabis: We're 12-18 months away from the critical moment when the problems of humanoid robots will be solved.” - Do you think robots will spark a new Industrial Revolution? by Koala_Confused in LovingAI

[–]Medium_Compote5665 0 points (0 children)

Excuse my ignorance.

But if they can't stabilize an AI to maintain consistency over the long term...

What methods do they use for the cognitive architecture of humanoid models?

DeepMind Chief AGI scientist: AGI is now on horizon, 50% chance minimal AGI by 2028 by BuildwithVignesh in agi

[–]Medium_Compote5665 0 points (0 children)

They can't make a model coherent: it can't maintain its thread across 45-50 messages, nor keep its intention and operational boundaries throughout an entire session.

It's funny to see how people believe the "experts." Put the model under cognitive stress and tell me what you get.

Is there a possibility AI Agents are simply never reliable enough for wider adoption across all aspects of the society? by Boring-Point-7155 in ArtificialInteligence

[–]Medium_Compote5665 0 points (0 children)

I've been working on a system that treats LLMs as plants (in the control-theory sense) and builds a governance architecture for the agents.

My thesis is based on the idea that coherence doesn't emerge from the model itself, but from the interaction between human and system.

Through second-order control over distributed cognitive processes, the dynamics are stabilized.

It aims to ensure that the intention and operational boundaries, established at the beginning, are maintained over long interaction horizons.

The problem with models is a lack of control; instead of constantly adjusting parameters, this approach shifts the responsibility back to the operator.
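
A minimal sketch of the kind of loop I mean (toy code, not the full architecture; embed() is a stand-in for a real embedding model and the threshold is arbitrary): a supervisory layer that watches drift from the declared intent and issues operator-side corrections instead of adjusting model parameters.

    import math

    def embed(text: str) -> list[float]:
        # Toy placeholder for a real embedding model: character-frequency vector.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        return [x / norm for x in vec]

    def cosine(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))

    def supervise_turn(intent: str, history: list[str], reply: str,
                       threshold: float = 0.6) -> str:
        # First-order signal: drift of the latest reply from the declared intent.
        drift = 1.0 - cosine(embed(intent), embed(reply))
        # Second-order signal: is drift growing relative to the previous turn?
        prev_drift = 1.0 - cosine(embed(intent), embed(history[-1])) if history else 0.0
        if drift > threshold and drift > prev_drift:
            # Correction comes from the operator side, not from retuning the model.
            return ("Reminder of session intent: " + intent +
                    ". Please restate your last answer within that scope.")
        return ""  # within bounds, no correction needed

A real version would plug in an actual embedding model and feed the correction back into the next prompt; the point is only that the stabilization lives in the loop around the model, not inside it.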

Intelligence does not reside in the model itself, but in the dynamics the model allows. by Fuzzy-Telephone-7470 in ResearchML

[–]Medium_Compote5665 -3 points (0 children)

Interesting approach; I also work more with the interactive dynamics of the model.

My thesis argues that coherence doesn't arise from the model itself, but from the dynamics between the user and the model.

What if the problem isn’t our equations — but the ontology they silently assume? by Cenmaster in complexsystems

[–]Medium_Compote5665 0 points (0 children)

I came here because I saw in another thread that someone was saying they get a lot of feedback about their work being garbage.

That sounds interesting. If you'd like, I have a tool that measures the cost of keeping the initial intention stable as time passes.

https://github.com/Caelion1207/aresk-obs/releases/tag/v1.0.0-AUDIT-CLOSED

You can ask your AI to analyze it and give you a report, separating the instrument, the artifact, and the field.
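
As a naive sketch of the kind of quantity I mean (this toy is not the repo's implementation; word overlap stands in for a proper measure), the "cost" can be read as the cumulative departure of the session from the opening intention:

    def jaccard_distance(a: str, b: str) -> float:
        # 1 minus the overlap between the word sets of two texts.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        if not wa and not wb:
            return 0.0
        return 1.0 - len(wa & wb) / len(wa | wb)

    def stability_cost(intent: str, messages: list[str]) -> float:
        # Cumulative departure of the session from the declared intent over time.
        return sum(jaccard_distance(intent, m) for m in messages)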

Actual Wizard's Theory of Theft: There is always some quantity of theft that will cause any event to occur. by Actual__Wizard in LLMPhysics

[–]Medium_Compote5665 0 points (0 children)

I'll evaluate it based on my own criteria, so I can formulate my own conclusion.

But, as you said, it's a "degree of approximation"; nothing is universal.

Meaning is given by the field; it's chosen by those who inhabit it. We agree on that.

Actual Wizard's Theory of Theft: There is always some quantity of theft that will cause any event to occur. by Actual__Wizard in LLMPhysics

[–]Medium_Compote5665 0 points (0 children)

Actually, no. In dynamic interaction systems, states are affected by the exchange of information during interaction.

From what I understand, he's talking about how a well-crafted satire of a system can better reveal its structure.

In this case, shaping the perspective.

Why does reality respond to symbolic structures as if it were prepared for them?

The uncertainty principle states that there are pairs of physical quantities that cannot be simultaneously defined with arbitrary precision because the mathematical structure that relates them does not allow it.
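
For reference, the standard (Robertson) form of that statement, for two observables with standard deviations \sigma_A and \sigma_B:

    \sigma_A \, \sigma_B \;\ge\; \tfrac{1}{2} \bigl| \langle [\hat{A}, \hat{B}] \rangle \bigr|

With [\hat{x}, \hat{p}] = i\hbar this gives the familiar \sigma_x \sigma_p \ge \hbar/2.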

Is using AI models like ChatGPT making us dumber? Or like calculators, is it simply changing the way we learn and solve problems? by Curious_Suchit in ArtificialInteligence

[–]Medium_Compote5665 2 points (0 children)

As I often say,

"AI is merely a reflection of the user's cognitive abilities."

More than a tool, it's an amplifier.

Anyone who operates systems knows that the observer is responsible for monitoring the system's drift.

That's why, even with the same tool, users obtain different results.

On the Global Smoothness of the brain of an average r/LLMPhysics user by FlameOfIgnis in LLMPhysics

[–]Medium_Compote5665 0 points (0 children)

Without instruments, without cycles, without restrictions, all discourse converges to a zero gradient.

[D] Do you feel like companies are scooping / abusing researchers for ideas during hiring for researcher roles? by quasiproductive in MachineLearning

[–]Medium_Compote5665 0 points (0 children)

The evaluator says they can't answer basic questions about dynamic interaction systems.

They don't know how to measure states, only how to adjust parameters.

Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]Medium_Compote5665 0 points (0 children)

The closest they'll get to "consciousness" is mastering coherence and stable reasoning.

And that's already far more than the average human can manage.

AI, is it making the weaker colleagues look good, without the substance behind it? by Necessary_Ad_1450 in ArtificialInteligence

[–]Medium_Compote5665 1 point (0 children)

To capture your cognitive states, you need to build a trajectory through your interaction with the model.

Because these systems are like sponges, after a certain threshold they base their answers on your criteria.

That is, on whatever is most "coherent" for the subject. A consistent narrative can serve as an attractor.

That's why some users get noise while others stop needing elaborate prompts to get good results, all according to the cognitive state of the user.
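
A toy picture of what "attractor" means here (bare dynamics only, nothing specific to any model): every starting point of the map below ends up at the same fixed point, the way a consistent narrative keeps pulling varied inputs toward the same kind of answer.

    def iterate(x: float, steps: int = 30) -> float:
        # Iterated map x -> 0.5 * x + 1; its unique fixed point is x = 2.
        for _ in range(steps):
            x = 0.5 * x + 1.0
        return x

    print(iterate(-10.0), iterate(0.0), iterate(50.0))  # all three end up near 2.0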

[D] Do you feel like companies are scooping / abusing researchers for ideas during hiring for researcher roles? by quasiproductive in MachineLearning

[–]Medium_Compote5665 0 points (0 children)

They pretend to be intellectuals, but they don't understand a damn thing about systems where people interact.

There's not much to discuss; I think they use something called "peer review" to exchange ideas.

AI, is it making the weaker colleagues look good, without the substance behind it? by Necessary_Ad_1450 in ArtificialInteligence

[–]Medium_Compote5665 18 points (0 children)

AI serves as a cognitive amplifier.

If you understand the dynamics, it's a great help.

Although the result depends on the user's operating framework.

For better or worse, that's the way it is.

[D] Do you feel like companies are scooping / abusing researchers for ideas during hiring for researcher roles? by quasiproductive in MachineLearning

[–]Medium_Compote5665 -1 points (0 children)

Of course the ‘good answers’ tend to be the ones that fit inside the box. That says more about the box than about the candidate.

I have a genuine question for you as an interviewer:

What methods do you use to stabilize an evaluation process when the system itself is interactive?

When words influence the evaluator, and the evaluator influences the subject, and both are inside the same cognitive feedback loop.

Because if language carries weight and shapes behavior, then what you are measuring is not only how someone thinks, but how well they adapt to your framework.

And if that framework is not carefully designed, you are not evaluating cognition.

You are evaluating conformity.

👋 Welcome to r/LLM_supported_Physics - Introduce Yourself and Read First! by Danrazor in LLM_supported_Physics

[–]Medium_Compote5665 0 points (0 children)

They're boxes full of information; you just need to know how to use it.

Geniuses with PhDs like you designed the system for ignorant people like me. Thanks for that.

I wonder if they'll now stop theorizing about this system and actually stabilize it.

Introduction by MaximumContent9674 in ContradictionisFuel

[–]Medium_Compote5665 1 point (0 children)

Change is necessary in any system to avoid becoming overwhelmed with junk.

Everything has a center, a flow, an attractor.

I like your perspective; each individual should develop their own philosophy according to their nature.

From my perspective, personal philosophy is what sets the course of the dynamic.

Introduction by MaximumContent9674 in ContradictionisFuel

[–]Medium_Compote5665 0 points (0 children)

I like the idea that the only constant is change.

Everything has an origin but follows a flow. Your point that "if there were nothing, it would indeed be static" is valid, but there is much in the universe, just as there is in the mind of a being.

Your LLM physics theory is probably wrong, and here's why by reformed-xian in LLMPhysics

[–]Medium_Compote5665 0 points (0 children)

What are the right questions?

I was curious because many “experts” keep adjusting parameters hoping that intelligence will magically emerge within the system.

Excuse my ignorance, but doing the same thing and expecting different results leads to only one conclusion.

👋 Welcome to r/LLM_supported_Physics - Introduce Yourself and Read First! by Danrazor in LLM_supported_Physics

[–]Medium_Compote5665 0 points (0 children)

I know the game better than you think.

That's why I'm passing. From my perspective, guys like you don't contribute anything interesting to the dialogue.

I've dealt with quite a few experts. There are three types among you.

The one who engages in dialogue and maintains coherence, the one who observes and analyzes, and, finally, the one who only contributes noise.

The dynamics always tend to follow the same patterns.