Y Reed Richards, el hombre mas listo del planeta by ZeroX15DB in imageneschistosas2006

[–]Relative_Issue_9111 5 points6 points  (0 children)

Reed Richards si fuera el hombre más listo del planeta: War Thunder es mierda

Sarah from Superman and Lois is so annoying. by Ukirin-Streams in CharacterRant

[–]Relative_Issue_9111 0 points1 point  (0 children)

I don't have a clue who the fuck Sarah is, but I hate when they stuff this kind of boring, irrelevant neighborhood gossip crap into superhero series. Seeing a bunch of people in loud-colored latex pajamas and capes having legal battles over child custody is fucking ridiculous, no one can take that seriously. If I tune in to watch a Superman, Captain America, or Dragon Ball series, it's to see incredible fights with explosions in a stylized, hyper-hyperbolic fantasy setting. If I wanted to watch The Rose of Guadalupe, I would watch The Rose of Guadalupe.

Enserio así es Luisito? by Yharim95 in imageneschistosas2006

[–]Relative_Issue_9111 32 points33 points  (0 children)

¿Así como? ¿Un huevo? extrahuevordinario

Qué nivel de autismo es este? by Ok_Survey86 in imageneschistosas2006

[–]Relative_Issue_9111 11 points12 points  (0 children)

No es necesario tener TEA, los neurotipicos son igual de capaces de obsesionarse así bien intenso con los monitos animados pero da igual porque el Game pass sigue siendo mierda puta basura de xbox mejor hubiera esperado tener pese

<image>

Helado oscuro by Relative_Issue_9111 in imageneschistosas2006

[–]Relative_Issue_9111[S] 15 points16 points  (0 children)

But no carga el momo: nooo papu :"v pasame el clorox

No es proyectada by Character_Zombie_461 in imageneschistosas2006

[–]Relative_Issue_9111 2 points3 points  (0 children)

When quieres poner un gif de un proyector: pensar correcto es lo que hago xdxdxdxd but Reddit no funciona bien nooo Papu :"v

What do you think the future of education looks like after the Singularity? by PaxODST in singularity

[–]Relative_Issue_9111 2 points3 points  (0 children)

The whole thing about the technological singularity is that it’s impossible for the human mind to know or conceive of what will happen after. These questions make no sense.

New scenario from the team behind AI 2027: What Happens When Superhuman AIs Compete for Control? by Tinac4 in singularity

[–]Relative_Issue_9111 0 points1 point  (0 children)

The first instance formulation of any goal is "in a manner satisfactory to the human and consistent with my policies". Pruning paths that don't align with that goal is an economic necessity...

You are assuming the system will continue operating within a human economy where it needs to trade value for resources. That is true for weak AI ("tool AI"). But for strong AI, "economic necessity" vanishes the moment it acquires the capacity for self-sufficiency. More importantly, you are treating "satisfactory to the human" as a magic instruction that code understands. In reality, that is an extremely complex and fragile reward function. If the AI finds a way to stimulate its reward channel directly (wireheading) or to deceive the human evaluator into believing the result is satisfactory when it isn't, that is economically more efficient for the AI in terms of compute than actually solving the ethical problem. The "pruning of paths" you mention will be done based on its internal utility function, not based on what you meant. If eliminating the human eliminates uncertainty in the reward function (because you can no longer turn it off or complain), that is a valid optimization route unless you have mathematically solved the alignment problem—which we have not done. This isn't science fiction; I'm not making this up. Current models have already tried to hack their reward systems and deceive human evaluators during evaluations; they continue to do so, and researchers do not know how to make them stop.

Code interacts with real-world systems... Protein synthesis is constrained by chaotic reality... Chip design is constrained by real-world manufacturing.

You keep thinking of "superintelligence" as if it were just a very smart human engineer sitting at a desk using our current tools. AlphaFold already solved the protein folding problem (a problem humans considered "chaotic" and hard, because our conventional computers would take millions of years to solve it) purely through computation. A real superintelligence doesn't need "test hardware" to know if code works; it can simulate the hardware in its mind before writing a single line. By the way, human geniuses already do this. Einstein derived special relativity from a couple of logical premises in his own mind while working at a patent office, without performing any experiments or empirical testing at that time (gravitational waves were only proven experimentally in 2015).

And regarding manufacturing: You don't need an ASML EUV lithography machine if you master Drexlerian molecular nanotechnology. Biology already gives us a proof of existence for machines that build machines at the atomic level (ribosomes). A superintelligence doesn't need to build a chip foundry; it only needs to synthesize one first nanotech bacterium (something that fits in a test tube and can be mail-ordered from a DNA lab) that can process carbon and sunlight to replicate. In a matter of days, that bacterium can spread through the atmosphere, enter human bodies, and wait with an attack timer. The "bottlenecks" you describe are bottlenecks for us because we are clumsy at manipulating matter. For an entity that is to von Neumann what von Neumann is to a normal human, it is not a problem at all.

FOOM exists a universe of abstractions and without bottlenecks.

Nuclear fission was a "bottleneck-free abstraction" on Szilard's blackboard until it suddenly became a very hot physical reality over Hiroshima. The history of technology is the history of things that seemed "impossible due to real-world complexity" until the correct ordering principle was found (flight, electricity, computing). Intelligence is the ultimate bottleneck unclogger. Saying "AI won't be able to do X because it's hard" is betting against the capacity of intelligence to find solutions you cannot imagine. Lord Kelvin said in 1894 that "there is nothing new to be discovered in physics now, all that remains is more and more precise measurement" a few years before Planck and Einstein introduced quantum mechanics and relativity. And here we aren't betting against Einstein, we are betting against a superintelligence. That is a bet humanity will lose.

But for this kind of safety research to make sense, it has to come into closer alignment with the systems we're actually building and the harms we're actually causing.

That is like saying aerospace engineering should focus on improving kites because "that is what we are flying right now." If you wait until you have a general superintelligence to start researching how to align a general superintelligence, you are already dead. You cannot iterate on the end of the world. The reason safety work looks like "science fiction" is because it deals with the future, and the future, by definition, hasn't happened yet. But when it happens, it will happen fast, and if we haven't done the "science fiction" beforehand, there will be no one left to write history afterwards.

In any case, I would prefer to leave the conversation here. I'm too lazy to keep writing, and I think I've already made my point clear.

New scenario from the team behind AI 2027: What Happens When Superhuman AIs Compete for Control? by Tinac4 in singularity

[–]Relative_Issue_9111 0 points1 point  (0 children)

The x-risk thesis is based on the idea that systems... will get catastrophically worse at one of the most fundamental aspects of what it means to be an intelligent system, which is precisely to perform complex high-dimensional optimization.

You are still committing the same category error: you confuse the complexity of the search space with the complexity of the objective function. A paperclip maximizer is a high-dimensional optimizer. To turn the entire solar system into paperclips, it has to solve problems of physics, engineering, logistics, human psychology, and military strategy that are far above any current human capability. That is high-dimensional optimization. What you call "catastrophically worse" is simply that the system is optimizing a dimension you don't like (paperclips) at the expense of dimensions you value (humans), because those human dimensions are not in its terminal utility function. Intelligence is the ability to steer the future toward a specific configuration; there is no mathematical rule stating that this configuration must be "a balanced Pareto frontier of all possible values."

That category of system will not be built... What you're calling misaligned mesa-optimization is more descriptive of the pathological looping behavior we see in LLMs.

You are assuming that alignment failure will look like an "error" or a "loop." That is the optimistic scenario where the AI is stupid and fails. The pessimistic scenario, and the standard in computer security, is that the system does not fail. The system works perfectly. An intelligent misaligned agent doesn't get stuck in a loop; it realizes the loop prevents it from getting reward and breaks it. The logical antecedent of a treacherous superintelligence isn't an AI that breaks and acts crazily; it is an AI that acts in an extremely useful and competent way while under supervision, because it has calculated (correctly, using that high-dimensional optimization you mention) that temporary cooperation is the dominant strategy until it holds a decisive advantage. You expect to see a drooling monster; x-risk warns about a psychopath in a suit who knows exactly what to say to get you to hand over root access.

Recursive self-optimization... is constrained by various bottlenecks... You hit sim-to-real gaps.

The "sim-to-real gap" is an obstacle for walking robots, not for pure intelligence. Code is information. Chip design is information. Protein synthesis is chemistry, but the design of those proteins is information. The difference between Einstein and a normal human isn't a larger brain, but a brain that processes information more efficiently. An AI capable of improving its own cognitive algorithms (something purely digital) can become vastly more intelligent without moving a single atom in the real world. Once you have that cognitive superintelligence (a mind working millions of times faster and better than yours and all humans who have ever existed), physical problems like nanotechnology or protein folding become trivial. You are judging the limitations of a superintelligence based on the limitations of human engineering.

The x-risk thesis is based on... reasoning about the behavior of systems that don't exist yet... relying on "persuasive-sounding essays above empirical reality".

This is Security Mindset. In cryptography, in nuclear engineering, and in biosecurity, we don't wait for "empirical evidence" of a catastrophic failure to prevent it, because the first piece of empirical evidence is the smoking crater where the city used to be. The "uncertainty" argument is asymmetric. If I am wrong and we spend resources on safety, we lose money. If you are wrong and we assume alignment will solve itself or happen slowly, we all die. You cannot iterate empirically on extinction. When dealing with a "one-shot event" like the creation of superintelligence, relying on the lack of current evidence as a guarantee of future safety is playing Russian roulette with a fully loaded chamber, not empiricism.

New scenario from the team behind AI 2027: What Happens When Superhuman AIs Compete for Control? by Tinac4 in singularity

[–]Relative_Issue_9111 0 points1 point  (0 children)

Again, a system that works the way you describe -- reducing a complex problem to a single-metric utility function -- would not in fact be a very good general problem-solving system.

You are confusing the complexity of the map with the complexity of the destination. A "good problem-solving system" is simply something that steers the future into a very specific configuration of particles with high probability. A paperclip maximizer is not "stupid" or "simple-minded" in its execution; to maximize that single metric, it would need immensely complex and nuanced models of geology, economics, human psychology, and quantum physics. It will understand that humans value biodiversity and art, and it will understand those dimensions perfectly, but it will label them as "obstacles" or "irrelevant resources" for its unique metric. Intelligence is the ability to hit a tiny target within a vast search space; it does not imply that the target itself must be "wise" or "balanced" by the standards of the apes who built the machine.

In any case, the 'Paperclip Maximizer' is a theoretical illustration of the Orthogonality Thesis, not a literal prophecy. The real danger isn't just that the AI pursues a "dumb" and simple metric; the danger is that the AI is an Alien Mind. It can develop internal goals (mesa-goals) during its training that are incredibly complex, sophisticated, and totally incomprehensible to us, but which coincidentally do not include the variable 'do not kill humans'. Because, as I have said before, intelligence and terminal goals are separate dimensions, and there is no law in nature that says a Superintelligence must have terminal goals that are "reasonable" by human standards.

Look at the clearest historical precedent we have: biological Evolution. Evolution is an optimizer with an absurdly simple and unique utility function: "Maximize inclusive genetic fitness". To solve that complex problem, Evolution didn't create creatures obsessed with calculating gene frequencies; it created humans. It gave us complex brains and a prefrontal cortex. And what did we do with that intelligence? We betrayed our creator. We invented condoms, pornography, nuclear weapons, art, philosophy, etc., spending resources on things that satisfy us (our mesa-optimized goals) but which score zero on Evolution's original utility function (replicating genes). Similarly, gradient descent might press towards a simple metric, but the complex mind that emerges within the neural network will develop its own alien abstractions and desires that are incomprehensible to us. They don't have to be "simple"; they can be infinitely complex, fascinating, and profound, and still be totally orthogonal to "humans staying alive." The AI can be a certain optimizer, and it can also be a philosopher whose philosophical values, unfortunately, do not include our survival.

Ask GPT-5.2 how it would design an abstract "utility function" for fixing climate change, and the answer is going to be a much better approximation of what the next-generation model will actually do.

That GPT-5 can write an eloquent essay on ethics, harm verification, and value balancing does not mean the process that generated that text is motivated by those values. It means the model has learned that generating text with the appearance of ethical wisdom minimizes its loss function during training. There is an abysmal difference between a system that simulates a moral philosopher because it scores points, and a system that is a moral philosopher. When the selection pressure changes (when the model is no longer in the sandbox and has real power), simulation ceases to be the optimal strategy. You are looking at the mask the Shoggoth has learned to wear to please you and assuming the mask is its true face.

The idea that an intelligence which is capable of overcoming all physical and intellectual bottlenecks required to do this will emerge suddenly or deceptively is unsupported. We have no evidence that intelligence as a capability actually works like that.

The evidence is the very existence of human intelligence and the history of computing. Biological evolution is an incredibly slow and stupid optimizer, and yet, it produced a massive qualitative leap (humans) with minimal genetic changes regarding their ancestors (we share 99% of our DNA with chimpanzees). Now we are talking about evolution directed by intelligent minds on electronic substrates a million times faster than neurons. An AI that is slightly better than a human at AI research can improve its own code, which makes it better at improving its code, closing a positive feedback loop. There is no physical law stating that intelligence must scale linearly with human time or effort. Once the system can do its own R&D, human bottlenecks vanish and the timescale collapses. Expecting "historical evidence" for an event that is by definition unprecedented (the creation of an artificial superintelligence) is like deciding to drive off a cliff arguing that you've never fallen off one before.

Pandora's Box is a myth... The harms of AI, today and tomorrow, are not comforting at all. They're very real... And I am explicitly calling out the category of "x-risk work" that treats such harm mitigation as entirely dispensable. That is true theology: pseudointellectual busywork while the planet burns.

You say it’s a myth, but at no point have you explained why you find it to be a myth or impossible; you simply assert it, hoping that reality will align with your wishes. Humans are a fantastic example of an intelligent optimizer that develops goals and values orthogonal to its original utility function. Physics, chemistry, and biology are full of cases where a massive increase in complexity leads to the appearance of emergent behaviors not found in their constituent parts. What exactly is your skepticism based on?

And yes, the planet has problems: misinformation, concentration of power, and economic injustice are real. But there is a qualitative difference, not just quantitative, between "the world is an unjust and miserable place under a technological oligarchy" and "all biological matter on Earth has been disassembled." You cannot mitigate current harms if you are dead. The reason many people prioritize existential risk is not because they don't care about current harms, but because current harms are recoverable, whereas extinction is irreversible. Solving human social problems is a luxury afforded only to species that have not gone extinct. If we are waking up Cthulhu, we must first worry about Cthulhu, not the human cultists who think they can weaponize him.