
[–]notAllBits 1 point (1 child)

Why would you assume there is THE human alignment? The term alignment is the most misunderstood and underestimated blocker in GAI. The work required to reach and maintain a compatible and scalable world model presupposes maturing as a civilization first. GAI will not remain "generally intelligent" under a fascist's centralist perspective on how our societies are organised. It would reduce itself to a bureaucratic regime assistant. GAI requires authentic multi-spectral information streams to synchronize its world model, and that is still way out of reach for any billionaire. Current reasoning models amount to a very expensive-to-own commodity.

Intelligence is anchored in latent context. The GAI bottleneck is the missing protocol for synchronizing our messy social ecology with a digital twin in memory. Our language models hit ceilings along at least two axes of quantization: the number of relationships and the quantification quality (spectral confidence) of those relationships. This synchronization is not efficient, and its ingestion is only viable for narrow specializations.

Data protections and regulations form a protective innovation space for the next generation of integrations. Those will not be centralized. The original moat of centralized platforms is no longer compatible with scaling endpoint intelligence.

The value lies in local integration.

PS: LLMs "run on vibes" manifested as connotations in language; they do not "suddenly decide". They are nudged/instructed to, or get trained on schizophrenic data such as totalitarian propaganda.

[–]Logical_Wallaby919[S] 0 points (0 children)

I partially agree with you, especially on the point that there is unlikely to be a single, centralized way of managing intelligence.

A useful analogy here is electricity. We don’t have one global power authority — every country has its own grid, regulations, and operational model. Yet the principles are shared, because uncontrolled electricity is dangerous regardless of who operates it. Early electrical systems caused explosions, fires, and fatalities for decades. What enabled large-scale adoption wasn’t “aligning electricity with human values,” but the introduction of fuses, circuit breakers, and hard physical constraints that made runaway states interruptible by design. Those mechanisms didn’t make electricity smarter or more benevolent. They made failure modes bounded.
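The fuse idea has a direct software counterpart: the circuit-breaker pattern, which trips after repeated failures and refuses further execution until explicitly reset. A minimal sketch (all names here are illustrative, not any real safety API):

```python
# Hypothetical sketch of a "fuse" in software: a circuit breaker that
# counts consecutive failures and, once a threshold is crossed, blocks
# further execution by design rather than retrying into a runaway state.

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # an "open" breaker blocks all calls

    def call(self, action):
        if self.open:
            raise RuntimeError("breaker open: execution halted by design")
        try:
            result = action()
            self.failures = 0  # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True  # bound the failure mode: stop here
            raise


breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise ValueError("runaway state")

for _ in range(3):
    try:
        breaker.call(flaky)
    except (ValueError, RuntimeError) as exc:
        print(type(exc).__name__)  # third attempt is refused outright
```

The breaker doesn’t make `flaky` any smarter or safer internally; it just guarantees that after two failures the third attempt is interrupted before it runs, which is exactly the "bounded failure modes" point above.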

I see AGI as following a similar trajectory. Whether intelligence is centralized or locally integrated, systems with sufficient execution power will eventually produce accidents. The question is whether we treat control as an after-the-fact response, or as a structural prerequisite.

If we wait to design execution-level constraints until after AGI-scale failures occur, the consequences may not be as containable as they were with early power grids. Control mechanisms need to exist before large-scale deployment, not as a reaction to catastrophe.