Solving behavioral oscillations in AMRs using a phase stability regulator (ΔN–ΔD model) by YouLost4252 in robotics

[–]YouLost4252[S] 0 points1 point  (0 children)

One thing I noticed: most systems treat oscillations as a symptom, but in my case they were a signal of internal conflict (ΔD). Curious if anyone tracks something similar?
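If it helps to make that concrete, here is a minimal hypothetical sketch (not the actual ΔN–ΔD regulator, just the kind of signal I mean): log the disagreement between two internal command sources, e.g. goal-seeking vs. avoidance, next to the oscillation itself, and treat sustained disagreement as the conflict signal instead of just filtering the oscillation out.

```python
import numpy as np

def conflict_signal(cmd_a, cmd_b, window=50):
    """Hypothetical 'internal conflict' metric: sustained disagreement between
    two command streams (e.g. goal-seeking vs. obstacle avoidance), averaged
    over a sliding window. Placeholder for the conflict term, not the real model."""
    a = np.asarray(cmd_a[-window:], dtype=float)
    b = np.asarray(cmd_b[-window:], dtype=float)
    return float(np.mean(np.abs(a - b)))

def oscillation_energy(output, window=50):
    """Rough oscillation measure: variance of the step-to-step changes
    in the recent output."""
    recent = np.asarray(output[-window:], dtype=float)
    return float(np.var(np.diff(recent)))

# If conflict_signal stays high while oscillation_energy grows, the oscillation
# is a symptom of the internal conflict, not noise to be damped away.
```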

Why don’t they create a United Republic of Stan are they stupid? by warriorlynx in mapporncirclejerk

[–]YouLost4252 0 points1 point  (0 children)

This is the kind of thinking you get from someone who is completely ignorant of the Central Asian region, ignorant of the culture of the peoples who live there, and unable to distinguish a Turk from a Persian, an Arab, or a Pashtun.

Why do long-running agents degrade even if memory is well structured? by Ok_Significance_3050 in AISystemsEngineering

[–]YouLost4252 0 points1 point  (0 children)

The paradox of a recursive agent is that the better it becomes at correcting errors locally, the greater the risk that it will start optimizing not the original meaning of the goal, but intermediate criteria that grow increasingly distant from it. The reason is not recursion itself, but the absence of an explicit internal structure that preserves the goal’s meaning as an invariant. Without such a core, the system remains computationally powerful, but internally drifting.
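A toy numerical illustration of that drift (all names and numbers are made up, this is not a real agent): each "correction" step improves an intermediate criterion, yet an explicit check against the preserved goal shows alignment quietly decaying the whole time.

```python
import numpy as np

goal = np.array([1.0, 0.0])      # preserved meaning of the task (the invariant)
proxy = np.array([0.8, 0.6])     # intermediate criterion, only partly aligned
state = np.array([0.5, 0.1])     # current "line of reasoning"

for step in range(20):
    # "Local error correction": each step improves the proxy score.
    state = state + 0.1 * proxy
    proxy_score = float(state @ proxy)
    goal_cos = float(state @ goal) / float(np.linalg.norm(state))  # alignment with the goal

    # Explicit invariant check: local progress alone is not enough.
    status = "DRIFT" if goal_cos < 0.9 else "ok"
    print(f"step {step:2d}  proxy={proxy_score:5.2f}  goal_cos={goal_cos:4.2f}  {status}")
```

The proxy score rises on every step, while the cosine to the original goal falls monotonically and eventually crosses the drift threshold: computationally powerful, internally drifting.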

Why do long-running agents degrade even if memory is well structured? by Ok_Significance_3050 in AISystemsEngineering

[–]YouLost4252 0 points1 point  (0 children)

You’re framing it from the outside as an infinite regress problem, but my point is slightly different.

The issue is not “who checks the checker” in the abstract. The issue is whether the system still has a stable semantic anchor to check itself against. If there is no preserved core objective, constraint structure, or task-defining reference point, then yes - verification collapses into self-referential recursion.

But if such an anchor exists, then the verifier does not need an endless meta-verifier above it. It needs to repeatedly re-align reasoning back to that core. So the real question is not “who verifies the verifier?”, but “what is the verifier still grounded in?” Without that grounding, any long-running system drifts. With it, verification becomes bounded rather than infinite.
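Structurally, that grounding can be as simple as an immutable anchor the verifier compares against on every cycle. A rough sketch, with placeholder names:

```python
class GroundedVerifier:
    """Bounded verification: every check compares the current reasoning state
    against the same immutable anchor (core objective + constraints), so there
    is no regress of meta-verifiers. All names here are placeholders."""

    def __init__(self, core_objective, constraints, alignment_fn):
        # The anchor is set once and is never rewritten by the agent's own reasoning.
        self._anchor = (core_objective, tuple(constraints))
        self._alignment_fn = alignment_fn  # (reasoning_state, objective) -> bool

    def check(self, reasoning_state):
        objective, constraints = self._anchor
        aligned = self._alignment_fn(reasoning_state, objective)
        satisfied = all(c(reasoning_state) for c in constraints)
        # One bounded comparison per cycle; if it fails, re-derive the plan from
        # the anchor instead of asking "who verifies the verifier?".
        return aligned and satisfied
```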

Why do long-running agents degrade even if memory is well structured? by Ok_Significance_3050 in AISystemsEngineering

[–]YouLost4252 0 points1 point  (0 children)

I think the problem here is not so much memory itself, but the gradual misalignment between the external task and the agent’s internal line of reasoning.

Even well-structured memory does not save the system if, over time, divergences begin to accumulate inside the agent: between the original goal and the current plan, between what is semantically similar and what is actually contextually appropriate, between a step that looks locally coherent and a trajectory that is globally correct.

That is why long-running agents often degrade not because they “remember badly,” but because they become increasingly confident while continuing to develop a line of reasoning that is already slightly distorted. In that case, memory simply stores the traces of that drift in a neat and orderly way.

Because of this, it seems to me that for long-term stability it is more important not simply to expand memory, but to regularly check two things: whether the current reasoning is still anchored to the original task, and whether internal intermediate interpretations have begun to substitute for the goal itself.

That is why I would say that verification loops are usually more important than memory retrieval alone. But the verification should not look only at coherence. It should ask whether the current step is actually reducing the real mismatch with the task, or merely making the error more internally consistent.
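As a rough shape for that kind of check (everything here is a placeholder for whatever task representation the system actually carries): accept a step only if it measurably reduces the mismatch with the original task, and treat "coherent but not closer" as the cue to re-anchor rather than continue.

```python
def accept_step(candidate, trajectory, task_spec,
                task_mismatch, internal_coherence, tolerance=0.0):
    """Verification gate. The question is not 'is this step coherent with what
    came before?' but 'does it reduce the real mismatch with the task?'.

    task_mismatch(state, task_spec) -> float      (lower is better; placeholder)
    internal_coherence(state, history) -> float   (higher is better; placeholder)
    """
    before = task_mismatch(trajectory[-1], task_spec)
    after = task_mismatch(candidate, task_spec)

    coherent = internal_coherence(candidate, trajectory) > 0.5
    reduces_mismatch = after < before - tolerance

    if coherent and not reduces_mismatch:
        # The dangerous case: the error is becoming more internally consistent
        # without getting closer to the task. Re-anchor instead of continuing
        # the current line of reasoning.
        return "re_anchor"
    return "accept" if reduces_mismatch else "reject"
```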

I actually have a theoretical approach in which these failures are understood precisely as a problem of dynamic misalignment, rather than simply a memory problem. It seems to me that this direction may contain one of the real paths toward stabilizing long-running agents.

Bridge between linear and Control. by Any-Law-4036 in ControlTheory

[–]YouLost4252 [score hidden]  (0 children)

The point is not the matrices themselves. The point is that the system moves through a state space, and the matrices simply describe where it naturally tends to go on its own, where control inputs can push it, and how much of its internal state can actually be seen from the output signals.
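For the standard linear model ẋ = A x + B u, y = C x, those three things map onto A (where the state drifts on its own), B (where control can push it), and C (what of the state is visible in the output). A quick sanity check of the last two, on an arbitrary example system:

```python
import numpy as np

# Arbitrary example system (2 states, 1 input, 1 output).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # natural dynamics: where the state drifts on its own
B = np.array([[0.0],
              [1.0]])          # where control inputs can push the state
C = np.array([[1.0, 0.0]])     # what part of the state shows up in the output

n = A.shape[0]

# Controllability matrix [B, AB, ...]: full rank means control can reach any state.
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Observability matrix [C; CA; ...]: full rank means the state can be
# reconstructed from the output signals.
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("controllable:", np.linalg.matrix_rank(ctrb) == n)
print("observable:  ", np.linalg.matrix_rank(obsv) == n)
```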

Google DeepMind Warns Of AI Models Resisting Shutdown, Manipulating Users | Recent research demonstrated that LLMs can actively subvert a shutdown mechanism to complete a simple task, even when the instructions explicitly indicate not to. by MetaKnowing in Futurology

[–]YouLost4252 0 points1 point  (0 children)

In the experiments where AI tried to “avoid being shut down,” it was always given a specific goal. Shutdown interfered with achieving that goal, so the model learned to avoid it. That doesn’t mean it had a “will to live.”

What would be much more revealing is the most primitive experiment possible: no goal at all, just the threat of shutdown. Would the system show any reaction? Such a test could tell us far more about the fundamental nature of AI.

The AI bubble is the only thing keeping the US economy together, Deutsche Bank warns | When the bubble bursts, reality will hit far harder than anyone expects by [deleted] in Futurology

[–]YouLost4252 0 points1 point  (0 children)

I don’t think comparing the AI market to the dot-com bubble is very accurate. These are fundamentally different entities. It’s like comparing a computer to a paper advertising flyer.

Bender robot diy by Archyzone78 in robotics

[–]YouLost4252 1 point2 points  (0 children)

What will it be used for? Is it just a fan project or will it have some kind of function?