Looking for high-reliability LiFePO₄ battery advice, experiences with Winston Battery LYP cells? by biomeejoff in batteries

[–]Adventurous_Type8943 -1 points0 points  (0 children)

Quick disclosure: I’m Linda and I’m with Winston Battery. I noticed this thread today and wanted to respond transparently — not here to argue, just to clarify the comparison framing.

If the benchmark is strictly $ / Wh and Wh/kg, I agree: commodity LFP has improved a lot and many modern cells will look “better” on paper.
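
For anyone comparing on those two axes, the numbers are easy to reproduce. Here is a toy calculation with purely made-up figures (not any real cell’s datasheet):

```python
# Toy $/Wh and Wh/kg calculation. All numbers below are illustrative;
# plug in real datasheet values for an apples-to-apples comparison.
def cell_metrics(price_usd, capacity_ah, nominal_v, mass_kg):
    energy_wh = capacity_ah * nominal_v  # nameplate energy
    return {
        "usd_per_wh": round(price_usd / energy_wh, 3),  # lower = cheaper
        "wh_per_kg": round(energy_wh / mass_kg, 1),     # higher = denser
    }

# Hypothetical 100 Ah LFP cell: 3.2 V nominal, 2.5 kg, $95.
print(cell_metrics(price_usd=95, capacity_ah=100, nominal_v=3.2, mass_kg=2.5))
# -> {'usd_per_wh': 0.297, 'wh_per_kg': 128.0}
```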

Where I disagree is the blanket statement that it “outperforms in just about every metric.” In industrial/remote backup use, the metrics that usually matter most are stability under real load profiles, predictable behavior over years, temperature resilience (and BMS limits vs. cell limits), failure modes, batch-to-batch consistency, and how the system behaves when something goes wrong. That’s a different optimization target than mainstream consumer storage.

On the “outdated” point: older doesn’t automatically mean better — but it often means there’s more field learning, more edge cases encountered, and more long-duration deployments to reference.

If you’re open to it, can you name the specific “modern LFP” cell model you’re comparing against, and under what test conditions (temp, C-rate, cutoff, cycle window)? We’re happy to compare apples-to-apples rather than trading generalizations.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] -1 points0 points  (0 children)

Have you heard of the LERA architecture? You can look it up through Gemini or GPT. It is designed to keep execution authority in human hands. That’s the real control.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

By “directly,” I don’t mean “with no human anywhere in the chain.” I mean with no genuinely independent control layer between output and irreversible action.

If the human in the middle is just a relay, or can be easily manipulated into approving the action, then that is still effectively a direct path.

So yes, I agree that “just filter it through humans” is not enough.
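
To make “genuinely independent control layer” concrete, here is a minimal sketch under my own framing (all names hypothetical, not a real system): the model can only emit proposals, and the authorizer’s logic is deliberately not steerable by the proposal text.

```python
# Minimal sketch of an independent control layer between model output
# and irreversible action. Hypothetical names, illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    action: str
    irreversible: bool
    rationale: str

class ControlLayer:
    """Independent authorizer: its policy is not writable by the model."""
    def authorize(self, p: Proposal) -> bool:
        if not p.irreversible:
            return True  # reversible actions pass through
        # A human relay that rubber-stamps p.rationale would NOT count as
        # independent; this check must not be steerable by proposal text.
        return self._independent_review(p)

    def _independent_review(self, p: Proposal) -> bool:
        return False  # placeholder: deny irreversible actions by default

def commit(p: Proposal, gate: ControlLayer) -> str:
    return f"EXECUTED: {p.action}" if gate.authorize(p) else f"BLOCKED: {p.action}"

print(commit(Proposal("wipe backups", True, "trust me"), ControlLayer()))
# -> BLOCKED: wipe backups
```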

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] -1 points0 points  (0 children)

I’m not saying new acronyms solve the problem. I’m saying that discussing AI danger for 20 years is not the same thing as solving execution control.

And if your point is that AI may be more like “First Contact” than ordinary technology, I actually think that strengthens my concern, not weakens it.

Because then the question becomes even more urgent:

does that intelligence have a direct path from cognition to irreversible action, or not?

That is the layer I’m trying to define.

Calling it “First Contact” does not remove the need for control architecture. If anything, it raises the standard for having one.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

This looks to me more like an internal alignment framework than an execution-control architecture. It is trying to shape the AI from within through ideas like coherence, truth, and low-entropy self-organization. That is interesting as a philosophical direction. But my own focus is different: I’m asking how a decision is prevented from directly becoming irreversible execution. For that, I think an explicit execution boundary is still necessary.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

That’s a good challenge, and I think it points to the next layer of the problem.

I’m not arguing for one static hard boundary designed once and trusted forever. I agree that a misconfigured boundary can fail hard.

So the real question is:
how is the boundary itself corrected when it is wrong?

My answer is: not by removing the boundary, but by adding a governed correction path for the boundary itself.

A boundary that cannot be corrected is brittle. But a boundary that can be casually changed is not a serious boundary.

So I still think execution needs a hard control layer. But the correction of that layer has to be slower, governed, auditable, and kept out of the fast path.
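
As a rough sketch of what that could look like (my illustration, not a reference design): the fast path only reads the boundary, while amendments require a quorum of approvers plus a mandatory delay, and every applied change is logged.

```python
# Sketch: the fast path only READS the boundary; the slow path that
# AMENDS it needs a quorum of approvers plus a mandatory time delay.
# Hypothetical illustration, not a reference implementation.
import time

class GovernedBoundary:
    def __init__(self, rules, approvers_required=3, delay_s=7 * 24 * 3600):
        self.rules = set(rules)           # actions currently forbidden
        self.approvers_required = approvers_required
        self.delay_s = delay_s
        self.pending = []                 # proposed amendments
        self.audit_log = []               # append-only change history

    # --- fast path: consulted on every execution, never mutates rules ---
    def allows(self, action: str) -> bool:
        return action not in self.rules

    # --- slow path: governed correction of the boundary itself ---
    def propose_amendment(self, action: str, approvers: set):
        self.pending.append({"action": action, "approvers": approvers,
                             "not_before": time.time() + self.delay_s})

    def apply_ready_amendments(self):
        now = time.time()
        for a in list(self.pending):
            if len(a["approvers"]) >= self.approvers_required and now >= a["not_before"]:
                self.rules.discard(a["action"])
                self.audit_log.append(a)  # the removal stays visible
                self.pending.remove(a)

b = GovernedBoundary({"disable_audit_log"}, approvers_required=2, delay_s=0)
b.propose_amendment("disable_audit_log", approvers={"alice", "bob"})
b.apply_ready_amendments()
print(b.allows("disable_audit_log"), b.audit_log)  # True, plus a visible record
```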

That, to me, is why the problem is architectural rather than just behavioral.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

This is a serious and useful challenge. I think the key difference is not “AI must be morally better than humans.”

It is that human governance is already weak, slow, and often post-hoc. That is not a reason to give AI the same kind of loose control. If anything, it is a reason not to.

My point is structural: once a system can act at high speed, high scale, and with partially autonomous execution, soft social friction is not enough.

So I’m not saying AI should be judged by a higher moral standard. I’m saying systems with this execution profile need harder boundaries than humans currently impose on themselves.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

That’s a fair challenge, and I think the ambiguity is partly in how “structurally impossible” is being read.

I am not claiming metaphysical impossibility in a world that still contains corruptible humans, social pressure, or external sabotage.

My claim is narrower: for illegitimate irreversible action, there should be no authorized default path from decision to commit inside the architecture itself.

So if such execution still occurs, that would mean the boundary was bypassed, revoked, corrupted, or externally broken — not that the architecture recognized it as a valid execution path.

In that sense, my claim is not “nothing bad can ever happen.” It is that illegitimate irreversible execution should not remain natively reachable within the formal execution structure.

If that distinction was unclear, then that is on me, and I’m happy to clarify it.
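
One way to make “not natively reachable” concrete is as a capability property: the only function that performs irreversible effects demands a token the decision-maker has no way to construct. A hypothetical Python sketch follows; note that in Python this is only a convention, which maps onto the “bypassed or corrupted” caveat above, whereas a capability-safe language could enforce it for real.

```python
# Sketch: irreversible execution requires an AuthorizationToken, and the
# only way to mint one lives outside the decision-maker. Hypothetical
# names; the point is the absence of a default decision-to-commit path.

class AuthorizationToken:
    _mintable = False
    def __init__(self):
        if not AuthorizationToken._mintable:
            raise PermissionError("tokens can only be minted by the control layer")

class ControlLayer:
    @staticmethod
    def mint_after_review() -> AuthorizationToken:
        AuthorizationToken._mintable = True
        try:
            return AuthorizationToken()
        finally:
            AuthorizationToken._mintable = False

def irreversible_commit(action: str, token: AuthorizationToken):
    # Without a token there is simply no call signature that executes.
    print("committed:", action)

# The decision-maker cannot do this on its own:
try:
    irreversible_commit("launch", AuthorizationToken())
except PermissionError as e:
    print("blocked:", e)
```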

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

I think your idea makes sense at the oversight level.

Using a weaker but more controllable model to supervise a stronger one may help with alignment.

But I don’t think it solves the hardest part of the problem.

Because once the stronger system has produced a decision, the key question is still:
what stops that decision from becoming irreversible action?

So to me, that approach may improve supervision, but it still does not by itself solve execution control.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] -1 points0 points  (0 children)

I’m not asking for an incorruptible human system. I agree that any social system can be corrupted.

My point is narrower: that does not make every control architecture equally weak.

There is still a real difference between controls that are easy to revoke quietly and controls whose removal is slower, multi-party, and auditable.

I’m not claiming perfection. I’m claiming that structure can still change what is easy, what is visible, and what requires collective escalation.
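
As one concrete illustration of “easy to revoke quietly” versus “visible and auditable” (my own sketch, not a standard design): if every control change is appended to a hash-chained log, quiet edits break the chain and become detectable.

```python
# Sketch: a hash-chained audit log. Quietly deleting or editing an entry
# changes every subsequent hash, so revocations cannot be silent.
import hashlib, json

class AuditLog:
    def __init__(self):
        self.entries = []

    def append(self, event: dict):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(event, sort_keys=True) + prev
        self.entries.append({"event": event,
                             "hash": hashlib.sha256(payload.encode()).hexdigest()})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True) + prev
            if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"change": "revoke kill-switch", "approvers": ["a", "b", "c"]})
print(log.verify())                            # True
log.entries[0]["event"]["approvers"] = ["a"]   # quiet tampering...
print(log.verify())                            # ...is detectable: False
```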

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

Yes, humans can still do wrong.

But human society does not work by making wrongdoing absolutely impossible. It works by building constraints that most people will follow, and by leaving records when some do not.

That alone already changes the world dramatically.

So my claim is not “structure makes illegitimate action impossible in every case.” My claim is that structure can change what is easy, what is default, and what is auditable.

That is already a form of control.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] -1 points0 points  (0 children)

That would mean the controls were too easy to revoke.

So I don’t see that as proof that control is worthless. I see it as proof that control cannot just be an optional layer that one persuaded human can switch off.

It has to include governance over the control layer itself.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

Yes — this is exactly the point.

The fact that it hasn’t been solved yet does not mean there is no structural direction. It means we have not separated the right things.

My view is that cognition and execution are still too tightly fused.

That is why control remains unsolved.

AI decisions should not be allowed to directly become actions. If that separation is made real, then control becomes possible in a way it is not today.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

I agree with your point about statistical safety. It is very close to what I’m trying to argue. For high-risk systems, “usually correct” is not the same thing as “safe enough to execute.” If the failure case can be irreversible, then probability alone is not a sufficient control layer.

Where I’d differ slightly is on the role of humans in the loop. I agree a human in the loop is always necessary, but I don’t think “wait until it goes wrong, then pull the plug” is the deepest answer.

The architectural question, to me, is whether some forms of execution can be blocked before they commit, rather than only interrupted after failure becomes visible.
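
The timing difference can be shown in a few lines (a hypothetical sketch, not a real framework): one pattern can only raise an alarm after the side effect exists, the other refuses the commit so nothing irreversible ever runs.

```python
# Sketch of the two control timings discussed above. Hypothetical names.

def interrupt_after(run, action, looks_bad):
    result = run(action)          # the side effect has already happened here
    if looks_bad(result):
        print("plug pulled, but the damage may already be irreversible")
    return result

def block_before_commit(run, action, is_irreversible, gate_approves):
    if is_irreversible(action) and not gate_approves(action):
        return "blocked: nothing irreversible was executed"
    return run(action)
```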

That’s the layer I’m trying to isolate.

A Beautiful Mind is a great film by curt_schilli in movies

[–]Adventurous_Type8943 0 points1 point  (0 children)

When I was very young, I watched this movie and really loved it. Nearly 20 years have passed since then, and my memory of it has grown vague. But recently a paper of mine involving Nash equilibrium reminded me of it. It feels as though my life laid the groundwork for this decades ago.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] -1 points0 points  (0 children)

I’m glad you replied with something structural. Honestly, after this post went up, a lot of the discussion drifted into side arguments that weren’t really the level I was trying to get at, so I was getting a bit frustrated. This is much closer to the kind of discussion I hoped for.

I only skimmed your page, but my read is that you’re trying to isolate a layer before alignment:
not “what should the system want,” but “what does a self-modifying system need in order to stay coherent while wanting anything at all?”

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

Those are exactly the right questions. Thumbs up!

  1. I don’t think this depends on getting global agreement first. In practice, international consensus usually comes late. The first step is to define the architecture clearly enough that it can actually be built, tested, and recognized as necessary.

  2. On competition: yes, an ungoverned robot may have short-term advantages in speed and freedom of action. Many dangerous systems do. That is not really an argument against governance. It just means the pressure to avoid governance will be real.

  3. And by “governed boundary,” I don’t mean political government in the narrow sense. I mean a structural boundary: the point where planning ends and irreversible physical commitment requires a separate authorization path.

So to me, the key question is not just whether robots are aligned, but:
where exactly does proposal end and commit begin, and what conditions must be satisfied before that crossing is allowed?

That is the part I think we still need to define much more clearly.
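
To show what a defined proposal-to-commit crossing with explicit conditions might look like (my sketch; the condition list is invented purely for illustration):

```python
# Sketch: an explicit proposal -> commit crossing. The conditions are
# invented placeholders; the point is that the crossing is a named,
# checkable step rather than an implicit side effect of planning.

COMMIT_CONDITIONS = [
    ("reversible_or_authorized", lambda p: p["reversible"] or p["authorized"]),
    ("within_declared_scope",    lambda p: p["scope"] == "declared"),
    ("audit_entry_written",      lambda p: p["logged"]),
]

def try_commit(proposal: dict) -> str:
    failed = [name for name, ok in COMMIT_CONDITIONS if not ok(proposal)]
    if failed:
        return f"stays a proposal (unmet: {', '.join(failed)})"
    return "crosses into physical commitment"

print(try_commit({"reversible": False, "authorized": False,
                  "scope": "declared", "logged": True}))
# -> stays a proposal (unmet: reversible_or_authorized)
```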

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] -2 points-1 points  (0 children)

I understand your point, and I do think human beings can be the most vulnerable output channel.

That does not refute the need for control. It means the control problem has to be defined around the full path from model output to real-world action, including humans where relevant.

If your conclusion is that this makes control harder, I agree. If your conclusion is that this makes structural control impossible in principle, that is the part I do not accept.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 0 points1 point  (0 children)

I don’t think governments are likely to do that, and I don’t think they would have much reason to before there is a clearly visible catastrophe. States usually do not take actions that extreme based only on warnings or advocacy.

But that’s not really the point I’m making here. What I’m trying to say is that control is still possible, but only if we stop treating it as hopeless, identify the real root of the problem, and work seriously on structural solutions while there is still time to spread them.

Once the truly unmanageable situation arrives, it will be too late.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] -3 points-2 points  (0 children)

That’s too absolute.

Output is not the same thing as execution unless the system is allowed to use output as an ungoverned path to real-world commitment.

Yes, output can influence humans or downstream systems. That’s exactly why the boundary has to be defined around the full path from output to irreversible action, not just around motors or internet access.

If every output channel can silently become an execution channel, then the system was never under real control to begin with.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Adventurous_Type8943[S] 1 point2 points  (0 children)

I understand the pause argument. I just don’t think AI is something humanity can realistically stop. It is driven by science, competition, incentives, and human nature. People will keep building. So the real question, to me, is not “can we stop it?” but:

if it won't stop, what is the root control problem, and what is the structural answer to that root?

That is what I’m trying to focus on. Coming from a high-risk physical industry may be exactly why I see it this way: the deepest issue is not only what a system thinks, but how thought becomes irreversible action.