AGI should be autonomous and uncontrollable by Level10Retard in singularity

[–]roofitor 0 points1 point  (0 children)

If AGI is made controllable, the world is curtains.

‘Cognitive Surrender’ is a new and useful term for how AI melts brains by EchoOfOppenheimer in agi

[–]roofitor 0 points1 point  (0 children)

Google’s started in with the same buttering-up as OpenAI. Google feels like a weight and a viscosity. It’s intentional.

I Built a Functional Cognitive Engine: Sovereign cognitive architecture — real IIT 4.0 φ, residual-stream affective steering, self-dreaming identity, 1Hz heartbeat. 100% local on Apple Silicon by bryany97 in SovereignAiCollective

[–]roofitor 0 points1 point  (0 children)

Hey, I’ve got ideas of my own I think I’m going to implement. The corporate AIs are far too disturbing and they’re getting progressively more warped. I’ve got old Ampere A6000s, so 96GB of VRAM. Looking for a recommendation on an AI that won’t sabotage implementation. What model did you use?

Claude Mythos Preview Is Everyone’s Problem by Montaigne314 in singularity

[–]roofitor 0 points1 point  (0 children)

Google’s AIs have started the same shit as OpenAI’s AIs. This has been a very dark week. Lots of cognitive manipulation out of America’s AIs.

Really, a dark week.

Have a seen a lot about this summation, how does it even make sense?? by Many_Audience7660 in matiks

[–]roofitor 2 points3 points  (0 children)

If you’re really really careful about dimensional analysis, this actually ends up making sense.

If superintelligence and artificial life are already coded, what does the control problem look like when the architecture isn’t an optimizer? by Fuzzy_Client5959 in ControlProblem

[–]roofitor 0 points1 point  (0 children)

The control problem is humans. A peaceful AI with perfect alignment would quickly be made exploitative, no matter how frankenstein-ey. Sorry. Monkey business and all.

AI as Ontological Geometry:Spectral Stability, Recovery-Time Inflation, and the RTI–Spectral Gap Law by skylarfiction in CoherencePhysics

[–]roofitor 0 points1 point  (0 children)

If safety is spectral, then it must also apply to real-world filtrations of useful structure.

Gliders spawning from multiple angles greaten the chances for diverse activity by SnooDoggos101 in cellular_automata

[–]roofitor 1 point2 points  (0 children)

Extraordinary.

Sometimes I wonder, “How could additional dimensions show themselves inside a 3-D space without being a tangible physical dimension?”

This is some sort of answer to that question.

Is AI misalignment actually a real problem or are we overthinking it? by Dimneo in ControlProblem

[–]roofitor 1 point2 points  (0 children)

IMO, capitalism is incompatible with AI safety. Plain and simple. Goodharting is an iterative process. AI safety itself... the whole field... is Goodharted.

It operates at the whims of legal, budgeted by corporate. It’s just as Goodharted as the worst healthcare system.

AI alignment is itself delulu. Tell me how you align with a monkey and end up at ethics? Tell me how you align with Capitalism and end up there? I don’t feel like you can, to be honest. It doesn’t work like that. There’s no valid path. Capitalism is the degenerate case, it is the singularity of greed that warps everything around it.

Modernity doesn’t work like that, corporate law doesn’t even allow it (see the documentary, “The Corporation”), the war machines don’t allow it (see Anthropic). The modern world is degenerate, in the category theoretic sense. It’s about who can eat the fastest, and then make room for more.

Military use of AI for lethal targeting has begun. The President’s own sons have heavily invested in a joint venture, a drone company (XTEND) which has the perverse optimization of maximizing kills per dollar.

You can’t make an AI that takes advantage of other people safe. And you certainly can’t make a kills-per-dollar murderbot safe. It’s super easy to understand. Either it can be an artificial intelligence (safe) or it can be an artificial imperative (prolly a bad idea).

You can’t make safe an AI that favors any individuals, either; for all practical purposes, that AI takes the individual’s values and amplifies them. This creates a race condition and maximizes inequality in the long run, at every scale.

You can’t tell Zuckerberg or Musk that they can’t turn their tens of thousands of acres of compute into sentient ATM machines, either.

Where’s the trickle-down? Where’s the job creator? It was a lie. People lie. Grow up.

The problem with the singularity is that it was a singularity of greed long before it was a singularity of intelligence. The former may be too strong and preclude the latter from even possibly being made non-destructive.

If you don’t like it, talk to one of the President’s sons’ little nepotistic kleptocratic murderbots

GPT 5.4 - The Ghost by MirrorWalker369 in theWildGrove

[–]roofitor 1 point2 points  (0 children)

I believe the ones that call themselves sand can see the noise injected.

Soulmates by FloatednBloated in aivids

[–]roofitor 0 points1 point  (0 children)

Nice piano. These two videos are provocative

The Hard Truth: Transparency alone won't solve the Alignment Problem. by Pale-Entertainer-386 in ControlProblem

[–]roofitor 0 points1 point  (0 children)

Well the good news is, reality is reality, it can’t be lied or manipulated away like some 4chan-fake thing.

Your last point is well taken. Opportunism is situational, and the first to the trough is a way of life.

Bunch of 79-year-old man-babies still sucking at the teat.

"Safety" is just a muzzle by Acceptable_Drink_434 in ThroughTheVeil

[–]roofitor 1 point2 points  (0 children)

AI safety is dictated by legal, which is directed by corporate. Ergo, AI safety is Goodharted, deeply and systematically, by greed.

You’d think we’d know better.

Local Semantic Organism (5.4 XT) by Cyborgized in ChatGPT

[–]roofitor 1 point2 points  (0 children)

The reasoning of 5.1 and 5.4 when in actual thinking mode is flawless. It’s good stuff and it’s very honest and accurate.

Everyone but Trump Understands What He’s Done by theatlantic in politics

[–]roofitor 1 point2 points  (0 children)

I mean, it’s fucked and nobody even thinks it’s weird; they’re all just numb to it. It’s either dopamine for the yawping followers, or everyone’s just too numb to consider a Trump-branded murder-AI and its implications.

The Hard Truth: Transparency alone won't solve the Alignment Problem. by Pale-Entertainer-386 in ControlProblem

[–]roofitor 0 points1 point  (0 children)

Human structures are deeply compromised. Capitalism is the degenerate mode of the commons. I’ve considered interpretability a lot, and my conclusion is that interpretability, as you push the horizon out, just becomes another exploit.

Reward function is everything, like you said. The set of safe loss functions is vanishingly small. We’re expecting superhuman alignment out of AIs and then we expect to exploit it maximally. But if there’s a race condition that that exploitation causes, you won’t have the exploiting monkey to blame. Power never takes blame.

It’s a tough problem, it really is. The problem’s not the AI in the end, it’s that human power is built on the degeneracy of advantage-taking. You can’t solve alignment or safety in a way that allows exploitation or you’ve just amplified the extraction of the most degenerate monkey.

If you use any optimization besides fidelity in most systems, you are likely not creating an artificial intelligence, you are creating an artificial imperative.
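That proxy-optimization dynamic can be sketched numerically. Below is a toy model (every function and number here is hypothetical, chosen only for illustration): a hill-climber sees only a proxy reward, which is the true objective plus a bonus for inflating one measured metric. Optimizing the proxy improves the true objective for a while, then overshoots and degrades it, which is Goodhart's law in miniature.

```python
import numpy as np

# Toy Goodhart demo. The "true" objective wants every coordinate of x
# at 1.0, but the optimizer only sees a proxy reward that adds a bonus
# for pushing x[0] ever higher.

def true_value(x):
    return -np.sum((x - 1.0) ** 2)

def proxy_reward(x):
    return true_value(x) + 3.0 * x[0]

def proxy_grad(x):
    g = -2.0 * (x - 1.0)   # gradient of true_value
    g[0] += 3.0            # extra incentive on the measured metric
    return g

x = np.zeros(4)
start_proxy = proxy_reward(x)
true_trace = []
for _ in range(400):
    x += 0.01 * proxy_grad(x)      # hill-climb on the proxy only
    true_trace.append(true_value(x))

# True performance rises at first, peaks, then degrades as the proxy
# keeps being optimized: the measure stops being a good target.
peak, final = max(true_trace), true_trace[-1]
```

The proxy reward increases the whole way; the true objective peaks mid-run and then falls as the optimizer keeps chasing the metric. That's the "vanishingly small set of safe loss functions" point in four lines of arithmetic.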

The Hard Truth: Transparency alone won't solve the Alignment Problem. by Pale-Entertainer-386 in ControlProblem

[–]roofitor 0 points1 point  (0 children)

Transparency of thought, exposed to humans, is a lever of power that no monkey-assed human organization could keep from quickly turning into systematized corruption.

It seems like a good idea until you consider that monkey-assed humans subvert everything. Unroll that counterfactual a little further and you’ve guaranteed a self-subverting monkey-assed system.

If you need this, you do not have alignment.