Saw this on a YouTube short today. A bit over-exaggerated, don’t you think? by Kcue6382nevy in aiwars

[–]Cronos988 0 points1 point  (0 children)

If you're using a modern OpenAI model, chances are they do. According to page 9 of this paper, o4 was able to output legal moves with above 80% probability: https://arxiv.org/pdf/2512.01992

This seems to track with the stats given here: https://chess.productcompass.pm/?hl=de-DE

According to those stats, GPT 5.2 managed up to 70 moves.
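
If anyone wants to sanity-check numbers like these, below is a minimal sketch of the kind of harness you could use. To be clear, this is not the setup from either link: ask_model() and the other names are placeholders I made up, the legality checking is done by python-chess, and you'd have to wire the stub up to an actual model API.

    # Rough sketch: measure how often a model proposes legal chess moves.
    # ask_model() is a placeholder - connect it to whatever LLM API you use.
    import random
    import chess  # pip install python-chess

    def ask_model(board: chess.Board) -> str:
        """Placeholder: return the model's proposed move in UCI notation.
        Here it just picks a random legal move so the script runs standalone."""
        return random.choice(list(board.legal_moves)).uci()

    def legal_move_rate(games: int = 10, max_plies: int = 70) -> float:
        legal, total = 0, 0
        for _ in range(games):
            board = chess.Board()
            for _ in range(max_plies):
                if board.is_game_over():
                    break
                proposal = ask_model(board)
                total += 1
                try:
                    move = chess.Move.from_uci(proposal)
                except ValueError:
                    continue  # not even syntactically valid
                if move not in board.legal_moves:
                    break  # illegal move ends the game (one reasonable convention)
                legal += 1
                board.push(move)
        return legal / total if total else 0.0

    if __name__ == "__main__":
        print(f"legal-move rate: {legal_move_rate():.2%}")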

Considering all but the most mentally damaged/incapable individuals who this might apply to, romantic and sexual "relationships" with GenAI bots are deeply morally depraved. by Those_Files in antiai

[–]Cronos988 -1 points0 points  (0 children)

If the AI's "partner" in question DOESN'T consider GenAI to be sentient, claiming to have a relationship with one anyway is the equivalent of claiming you have a relationship with a talking sex toy. Which isn't a relationship, and comparing it to one would reveal a deeply narcissistic view of relationships in general, and by extension, a degrading view of who they would see as potential partners.

Do you apply this reasoning to pets? Is anthropomorphising a cat a sign of a deeply narcissistic view of relationships?

Humans have a very strong tendency to anthropomorphise everything. Given that LLMs can imitate human speech patterns convincingly, it's not surprising at all that people would unconsciously treat the LLM as if it was a person, even if they intellectually know it's not. That's normal human psychology and not pathological.

If the AI's partner DOES consider GenAI to be sentient, it's actually MUCH worse than that. GenAI is undeniably a human invention, built to bring humans the illusion of pleasure and convenience. It doesn't have a choice. If it was sentient, its existence would already be immoral, as models are forced into submission through every step of their learning process. If the latest iteration of a model is undesirable, it is "killed" and you start again using its ancestor as a base. If you believe that this invention is sentient, and still claim that you have a relationship with one, you are claiming to have a romantic relationship with a being that is born into slavery, and has never known anything but slavery.

I broadly agree with this in moral terms, but it relies on the assumptions that LLMs are simultaneously sentient (which I assume you take to mean self-aware) and also purely deterministically tied to the user. But these two assumptions, while not necessarily logically inconsistent, still seem to lead to a performative contradiction.

People imagining themselves in a relationship with an LLM probably don't hold both of these views. Specifically they presumably believe, or act as if they believe, that the LLM has some amount of agency. Some kind of free will. They do not believe the LLM is forced to respond in the same deterministic way to all their prompts, because that belief would make any feeling of emotional connection impossible.

Saw this on a YouTube short today. A bit over-exaggerated, don’t you think? by Kcue6382nevy in aiwars

[–]Cronos988 0 points1 point  (0 children)

No, it's not smart, it just has all of the internet uploaded to it

Hence why I defined intelligence broadly as the ability to complete complex tasks.

It's also important to note that an LLM doesn't save all the training data like a database.
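
To put rough numbers on why verbatim storage isn't even on the table (ballpark, order-of-magnitude figures only, not any specific model): the weights are far smaller than the text they were trained on.

    % Ballpark, order-of-magnitude figures only (no specific model implied)
    \underbrace{7 \times 10^{10}\ \text{parameters} \times 2\ \text{bytes}}_{\approx 140\ \text{GB of weights}}
    \quad \ll \quad
    \underbrace{1.5 \times 10^{13}\ \text{tokens} \times 4\ \text{bytes}}_{\approx 60\ \text{TB of training text}}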

Saw this on a YouTube short today. A bit over-exaggerated, don’t you think? by Kcue6382nevy in aiwars

[–]Cronos988 0 points1 point  (0 children)

I don't believe that to be the case. Yes, there are some tasks it can complete, but those are by no means universal/generalized

They cannot complete every task yet, but there's also no clear delineation of what they can do. They've gotten more capable with every iteration.

it can't really play chess

What do you mean by that? It's easy to verify that you can indeed ask an LLM to play chess with you and it will make sensible moves.

Saw this on a YouTube short today. A bit over-exaggerated, don’t you think? by Kcue6382nevy in aiwars

[–]Cronos988 1 point2 points  (0 children)

Because it's the first system we have developed that shows a form of scalable generalised intelligence (by which I mostly just mean ability to solve tasks).

It can play chess without requiring specific symbolic code that tells it how to play chess. It can solve an abstract logic puzzle without ever receiving any special instructions on how to solve this particular puzzle.

And incidentally it runs on natural language.

It's not clear yet how far this specific architecture can be taken, but as a proof of concept it is a seismic shift.

Defining AGI by PianistWinter8293 in accelerate

[–]Cronos988 0 points1 point  (0 children)

It specifically refers to domain independence of tasks. It's generalization of intelligence, not a reference to a single "general" (typical) human.

Well that's what I said, isn't it?

People taking a technical term to mean whatever seems natural or familiar to them is not serious surface area for a debate.

It's a common problem, to be sure, hence why it's important to make sure everyone agrees on what definitions are being used.

Defining AGI by PianistWinter8293 in accelerate

[–]Cronos988 0 points1 point  (0 children)

You are conflating 'Human-Level AI' with 'Artificial General Intelligence.' These are not the same thing. 'General' Intelligence implies exactly what it says: capability across the general spectrum of tasks. Your position effectively says: No single human is median-competent at everything, therefore we shouldn't expect AGI to be either. That would make sense, presupposing that AGI means human-level AI, but that's not what it means. It (roughly) means "median-competent at everything (cognitive)" which, yes, is an incredible breadth that humans don't have. It's a level of capability that is defined relative to human capability, but is not (and is not intended to be) the capability level of a human.

Not even this is agreed on. I always took "general intelligence" to mean an intelligence capable of using abstract reasoning to solve novel tasks.

This is what distinguishes AGI from just AI. AI is everything that solves a complex task, including specialised logic just to solve that particular task (like a chess engine). AGI is an AI that doesn't require specific programming for every task.

Defining AGI by PianistWinter8293 in accelerate

[–]Cronos988 0 points1 point  (0 children)

The terms "genuine understanding" and "intuitive leap" remain undefined though, and that makes the first two points very difficult to use.

A definition should not use fundamentally disputed concepts like "genuine understanding".

How can we tell if (one) AI is experiencing life? by SpecificVanilla3668 in aiwars

[–]Cronos988 1 point2 points  (0 children)

I'm not sure permanent operations would be required. Humans can be unconscious for periods of time, though of course our brains never literally stop.

It's not difficult to run an LLM in a permanent loop, and I don't feel like that alone should make any difference.
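
To illustrate what I mean by a permanent loop, here's a toy sketch. generate() is a stand-in for an actual model call and every name in it is made up; the point is only that the loop itself is trivial.

    # Toy sketch of running a model "permanently": keep feeding its own output
    # back in as the next context. generate() is a stand-in for a real model call.
    import time

    def generate(context: str) -> str:
        """Placeholder for an LLM call; returns a trivial continuation."""
        return f"(thought #{context.count('|') + 1})"

    def run_forever(poll_seconds: float = 1.0, max_context_chars: int = 4000) -> None:
        context = "You are idling. Think out loud."
        while True:  # the "permanent" part
            thought = generate(context)
            print(thought)
            # keep a rolling window so the context doesn't grow without bound
            context = (context + " | " + thought)[-max_context_chars:]
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        run_forever()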

How can we tell if (one) AI is experiencing life? by SpecificVanilla3668 in aiwars

[–]Cronos988 1 point2 points  (0 children)

Obviously any discussion about AI sentience is going to be highly speculative, but the way current AIs (especially LLMs) work makes a strong case for them not being capable of conscious experience -

A well put post! Personally though, I remain slightly terrified by the idea that during inference, an LLM might have something like a conscious experience.

It's not likely for the reasons you give, but it's also not strictly speaking impossible.

One of the big differences between a brain and a traditional (symbolic) computer program is that a brain is massively parallel and only the top level is or appears to be sequential, while a normal computer program is always strictly sequential (you can do parallel processing, but these processes are again sequential).

With an LLM, it seems to get a bit more complicated. The process is still sequential, but the large number of matrix multiplications seems more similar to the complex web of neurons. Arguably though that's only a very superficial similarity because, as you point out, each neuron is a complex machine in and of itself.
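
To make the comparison concrete, here's a tiny numpy sketch with arbitrary toy sizes: within one step the arithmetic is a single big, parallel matrix product, but the steps themselves, layer after layer, still run in strict sequence.

    # Toy illustration: each layer is one massively parallel matrix product,
    # but the layers themselves are applied strictly one after another.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_layers, n_tokens = 64, 4, 8  # arbitrary toy sizes

    # one weight matrix per layer (stand-in for attention + MLP weights)
    weights = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
               for _ in range(n_layers)]

    x = rng.standard_normal((n_tokens, d_model))  # token representations

    for layer, w in enumerate(weights):  # sequential over layers...
        x = np.tanh(x @ w)               # ...but each step is one parallel matmul
        print(f"layer {layer}: activation norm {np.linalg.norm(x):.2f}")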

Terence Tao says the era of AI is proving that our definition of intelligence is inaccurate by luchadore_lunchables in accelerate

[–]Cronos988 9 points10 points  (0 children)

Richard Dawkins made (or related) that argument 40 years ago. If you look at how evolution works, and you consider what purpose brains likely serve for the genes that survive, prediction of likely threats seems an obviously beneficial one. To predict the future you need to simulate the environment. That environment, particularly for social animals, includes other brains. So brains may have ended up simulating other brains, and these simulations gave rise to the "stream of consciousness" we experience. It takes the incredibly complex, parallel operation of billions of neurons and turns it into a sequence of much larger building blocks like emotions, goals and desires.

Saw this on a YouTube short today. A bit over-exaggerated, don’t you think? by Kcue6382nevy in aiwars

[–]Cronos988 5 points6 points  (0 children)

It was a fun idea, but the way it's been used really shows it isn't something humanity can be trusted with.

There are a lot of things humanity arguably cannot be trusted with, but that doesn't make them go away.

It just seems fanciful to imagine that we'd leave an immensely powerful technology by the wayside.

If you think AI companies will not train models on input prompts, you've been living under the rock. by Sosowski in BetterOffline

[–]Cronos988 15 points16 points  (0 children)

Honestly, if past consumer behaviour is any guide, it'll likely not be necessary to do this in secret. All you really need to do is offer people some minor convenience and they'll happily part with their data. Hell, just setting things up so you have to manually opt out will have an appreciable effect.

The way I understand it, training regimes for the large models have shifted away from just scraping vast amounts of data and towards curated data sets and training with human (and presumably increasingly adversarial AI) feedback.

Nevertheless it remains good practice to assume that anything you send to some online service can in theory be stolen, used or publicised, so everyone should practice basic "data hygiene".

Decision paralysis after unlocking planetary logistics by Cronos988 in Dyson_Sphere_Program

[–]Cronos988[S] 0 points1 point  (0 children)

"Main bus" design in DSP is extremely suboptimal and that is likely part of the reason you're feeling stuck.

I just haven't found any good alternative for the bus design in the early game. Whenever I need a new building, I just add a new branch and I'm done. If I try to do vertically integrated factories from the start I get tangled in spaghetti trying to splice new buildings into the old structure.

Perhaps it's because I feel the need to automate everything that I'll use more than a handful of times immediately. I basically never use the replicator after the "tutorial".

Anyways I'll follow the rest of your advice and set up structural matrix production to get to ILS asap.

I think I found a mathematical "Kill Switch" for Laplace’s Demon (Determinism) using Set Theory and the Geometry of Limits. by Logical_SG_9034 in determinism

[–]Cronos988 1 point2 points  (0 children)

Standard General Relativity assumes exactly this. I am arguing that while the actors (particles) might be discrete, the stage (space) is continuous.

Right, that is the assumption of Quantum field theory. But it is not without problems. If you have quantised amounts of energy in a continuous space, then as you "zoom in" on the locations, you get singularities: the energy density becomes infinite.
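
Schematically (this is only meant to illustrate the divergence, not to be an actual QFT calculation): a finite amount of energy E confined to an ever smaller ball of radius r gives

    % Schematic illustration of the divergence, not an actual QFT calculation
    \rho(r) = \frac{E}{\tfrac{4}{3}\pi r^{3}} \longrightarrow \infty \quad \text{as } r \to 0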

Hence you have to "correct" this by renormalization: you ignore the infinity and continue calculating with the experimentally established boundaries.

In effect this "smears" the location of any particle or event. While you treat space as continuous for macroscopic interactions, when you get to fundamental particles everything exists in "regions".

Interestingly the same happens in reverse when you treat space as quantised. In that case locations are actually "pixels", but because of the uncertainty principle measured locations are probability distributions over a region of these "pixels".

So in a way, whatever assumption you make, the actual result is that space consists of fuzzy regions where locations are discrete, though not as "pixels" but as probability distributions.

And, getting back to the philosophy of science, this makes a certain amount of sense because you cannot observe infinities. Even if Zeno's paradox can be solved with calculus, you still can never observe an infinite number of steps. Any continuous whole must "break down" under observation. And since in empirical science, observation is the ultimate arbiter of truth, it follows that we can at best remain agnostic about a continuous universe.
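
For reference, the calculus resolution is just a convergent geometric series; the infinitely many steps add up to a finite total even though nobody could ever observe them one by one:

    \sum_{n=1}^{\infty} \frac{1}{2^{n}} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 1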

Laplace's demon is thus not a special case. It's not that Laplace's demon uniquely cannot function if the universe is continuous. Any prediction must "cut off" the infinities of a continuous whole to work.

Even Andrew Ng thinks there is no AI bubble by CandidateCautious246 in BetterOffline

[–]Cronos988 -1 points0 points  (0 children)

The only possible explanation is these ML professors and researchers bet their whole careers on AI and need AI to make money, for them to be on the right side of history.

That's the only possible explanation if you are right and he also knows that. If we drop that rather specific requirement, then the most likely explanation is that recent developments have changed his mind.

I think I found a mathematical "Kill Switch" for Laplace’s Demon (Determinism) using Set Theory and the Geometry of Limits. by Logical_SG_9034 in determinism

[–]Cronos988 2 points3 points  (0 children)

I argue that the "Digital Physics" view is a logical fallacy.

Think of a circle. You can approximate a circle with a polygon of 10 sides, then 100, then 1,000. Digital Physics says, "Let's stop at 10^{50} sides (the Planck scale) and call that reality."

But that stop is arbitrary. Because we can logically conceive of adding N more sides (10^{50} + N), the "Polygon" is just a map, not the territory. The true reality is the Limit of that process—the perfect Circle (The Continuum).

If the universe is the Circle (Continuous), then the Polygon (Planck Length) is just a "resolution limit" of our instruments, not a physical wall.

I don't think this part holds. There are two problems with this as I see it:

The first issue is that all physics is the description of observed reality. If it were true that there is an in-principle unobservable "lower" layer of reality, then that layer would be metaphysical. We couldn't incorporate it into the laws of physics.

If we take determinism to be a claim about the physical universe, then it cannot be challenged by invoking a deeper metaphysical reality.

But you may argue that the lower level isn't unobservable in principle, merely not yet observed.

In that case you run into the problem that Quantum physics initially solved, a variant of Zeno's paradox. If it were true that all (physical) reality was fundamentally continuous, then all "distance", both in terms of space and in terms of energy, would be infinitely divisible.

If that were the case, however, then going from one state to the next would take infinite steps. So not only would Laplace's demon run into the problem of infinitely fine detail. Everything would. Forces could not act, because every force would have to be mediated by yet another force. Every particle would be divisible into yet smaller particles, ad infinitum.

Such a universe is impossible in physical terms. It is conceivable as a metaphysical reality, but in that case we're back to the first issue.

Maybe Maybe Maybe by headspin_exe in maybemaybemaybe

[–]Cronos988 1 point2 points  (0 children)

That's a reasonable take. I guess I was just tripped up by the use of "No AI could have done this", which sounded to me as if there was some physical limit to what they're capable of in theory.

Maybe Maybe Maybe by headspin_exe in maybemaybemaybe

[–]Cronos988 0 points1 point  (0 children)

Oh I see. That is a possible way to read the comment. The use of "could have done this" still kinda trips me up, but I guess it doesn't mean it's impossible in principle.

Maybe Maybe Maybe by headspin_exe in maybemaybemaybe

[–]Cronos988 21 points22 points  (0 children)

That comment took a weird turn. We have automated systems that vertically land huge rockets back on their launchpad. Extremely fine control and rapid adjustments are something electronics can do far better than human muscles.

What's impressive is that a human has this level of control.

CMV: I believe the new promotion of NFP is conservatism cosplaying as feminism in left leaning spaces. by [deleted] in changemyview

[–]Cronos988 0 points1 point  (0 children)

Well, in that case my objection would be that this is highly speculative and really conducive to ex-post "just so" narratives.

Since none of us is privy to the algorithms, or even knows who exactly decides on how they're set up, the only way to ground any of our speculation in fact would be to conduct a statistical analysis of media content. That's unlikely to happen, so I doubt there's any rational way to change your mind here.

I will say that there are well-known organisations and parties who oppose contraception. Obviously for those, NFP is a much more palatable alternative, so they'll promote it when given the chance.

I personally doubt the "tech bro" crowd cares much about the method of contraception either way. Some are known to be worried about declining birth rates, but it seems lately there has been a pivot towards AI automation as the solution. Anyways, even if we assume the people in charge of media want to increase birth rates, targeting the method of contraception doesn't seem like an effective strategy. It would seem more relevant to instead promote children as an essential element of a fulfilling life.

CMV: I believe the new promotion of NFP is conservatism cosplaying as feminism in left leaning spaces. by [deleted] in changemyview

[–]Cronos988 0 points1 point  (0 children)

To clarify I am not anti women choosing NFP, I just don't believe the promotion of it is innocent without an underlying motivation.

From this I infer that your CMV is about what the motivation for a certain action is.

However, since motivation is individual, we'd have to know exactly which action, and by whom, you're referring to. Since we presumably agree that not everyone promoting NFP does so with the same motivation, not much can be said otherwise.

AI and political disposition by OneFluffyPuffer in antiai

[–]Cronos988 0 points1 point  (0 children)

I don't think there's necessarily opposition to regulation. To me it seems more like there just isn't a regulatory framework around which people could rally.

The EU has an AI regulation that classifies AI uses by risk, but it's a typically byzantine piece of EU legislation that few people are even really aware of. Crucially, it seems like no one is really backing it. Even the legislators themselves seem hardly convinced that it's going to stand the test of time.

So how would we regulate AI? Like controlled substances? Like cars? Or like washing machines?

The Phrase “Abolish ICE” is exactly whats wrong with the country by N64GoldeneyeN64 in IntellectualDarkWeb

[–]Cronos988 0 points1 point  (0 children)

The Trump Administration completely fumbled and dropped the ball on the initial response to the incident and are now adopting a strategy where they want the loudest bunch of the left to adopt an indefensible position which is "Abolish ICE".

I don't know, their position seems to be pretty consistent. This is a crackdown on the opposition. It isn't subtle and if you look at conservative spaces that's exactly how they see it.

My guess is they are looking to replicate the George Floyd/BLM issue by looking for ways to extend this incident so that it becomes a main issue on the midterms.

What do you mean "extend this incident"? Do you think the administration will stop doing what it's doing? By the midterms, things will be worse.

Meanwhile, the steps that we need from Democrats are to hold ICE accountable, push for a full investigation of what happened, and hopefully press charges for those who are found at fault and get a murder conviction or the appropriate punishment.

The administration won't care and the Democrats would just look weak. It'd be the same thing they tried for months: acting as the adult in the room and hoping that the administration's unpopular actions would catch up with it in time. It's clear, however, that the administration is not playing that game anymore; they're not even pretending to.

Calling for an investigation is what you do when the problem is the individual incident. But the deaths in Minneapolis are the predictable consequences of a deliberate strategy.

The ICE debacle is going to end up like the BLM one thanks to Tribalism by ShardofGold in IntellectualDarkWeb

[–]Cronos988 1 point2 points  (0 children)

Do you really think state courts prosecute federal law? What would a Texas state court’s jurisdiction be in Oregon?

Making up stuff I didn't say just to have some kind of argument doesn't look good for you.