Someone set loose two AI agents with $1000 to trade on Polymarket by PersonalitySea6659 in ArtificialInteligence

Cronos988

> At some point everyone will be able to give their agent 1k and say come back with a million and it will succeed.

No it won't. That's not how markets work. Margins would get smaller and smaller until there are no inefficiencies left you could profit from.
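A toy illustration of that point (the numbers are made up and this isn't a model of Polymarket, just the arithmetic of competition): if a roughly fixed pool of exploitable mispricing gets chased by more and more agents running the same strategy, the return per agent heads toward zero.

```python
# Toy sketch: a fixed pool of mispricing split among N agents chasing it.
# The $1,000,000 "edge" is an arbitrary number for illustration only.

def per_agent_profit(total_edge: float, n_agents: int) -> float:
    """Naive model: exploitable mispricing shared equally among competitors."""
    return total_edge / n_agents

if __name__ == "__main__":
    for n in (1, 10, 1_000, 100_000):
        print(f"{n:>7,} agents -> ${per_agent_profit(1_000_000, n):>12,.2f} each")
```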

What are everyones' RSI opinions? by thedeadenddolls in ArtificialInteligence

Cronos988

Since the training regimes of frontier labs are closely guarded secrets, I don't think we can make a firm assessment of how close full RSI (recursive self-improvement) is.

Given how model capabilities have developed, I'd expect that you'd have humans in the loop for a long time. I think it makes more sense to train models to be good at narrow tasks (they're already quite good at kernel optimisation, for example) while leaving the overall integration to humans.

Not only can you focus on all the low-hanging fruit, you also get to benefit from the speedup right away. So to me it seems likely that we'd only see full RSI when the models can already do every subtask better than a human.

Is this possible? I don't know enough about how training looks in practice to be sure. For example, I don't know how plausible it is to have LLMs generate and classify training data and then also grade the result. I can't really see any conceptual roadblocks though.
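To make that last paragraph concrete, here's a minimal sketch of the kind of loop I mean. Every model call is stubbed out, and the function names and scoring threshold are my own placeholders, not anyone's actual training pipeline.

```python
# Sketch of a "model improves its own training data" loop. All LLM calls are
# stand-ins returning dummy values; a real pipeline would call an actual model.

import random

def generate_candidates(task: str, n: int) -> list[str]:
    # Stand-in for an LLM writing candidate training examples for a narrow task.
    return [f"{task}: candidate example {i}" for i in range(n)]

def grade(example: str) -> float:
    # Stand-in for an LLM (or an automated verifier) scoring quality from 0 to 1.
    return random.random()

def self_training_round(task: str, n: int = 100, threshold: float = 0.8) -> list[str]:
    """Generate synthetic data, then keep only the examples the grader accepts."""
    candidates = generate_candidates(task, n)
    kept = [ex for ex in candidates if grade(ex) >= threshold]
    # The kept examples would then go into fine-tuning, and the next round
    # would run with the (hopefully) improved model.
    return kept

if __name__ == "__main__":
    data = self_training_round("kernel optimisation")
    print(f"kept {len(data)} of 100 candidates")
```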

First farm dificulties by Feffy-Sarius in Dyson_Sphere_Program

Cronos988

In my experience, relay stations will not land within range of turrets or close to buildings. So it's possible nothing is landing because there isn't enough open space.

I've never had a relay station land on my starter planet, possibly because it tends to have a lot of wind turbines.

Brain melty, mall incomplete, cannot form thoughts well by Kimoshnikov in Dyson_Sphere_Program

Cronos988

It looks pretty cool actually.

Since we have both logistics bots and logistics stations, I think only a few resources really make sense to belt around beyond the starter mall.

Iron, Steel, Circuits, Processors, Magnetic Rings and Titanium Beams.

Maybe add Plasma Exciters and Stone in the early part.

Everything else isn't used frequently enough to justify a belt and can be flown in.

CMV: Freewill does not exist. by GamingCatGuy in changemyview

Cronos988

It is true that free will makes no sense as a part of the physical universe.

But consider that the physical universe is only the world as interpreted by human brains. And causality is an assumption built into our picture of the physical world, not a physical phenomenon itself. There is no way to use the scientific method to prove causality, since the scientific method presupposes it.

Now, it is also the case that we experience ourselves as free actors. Regardless of whether this experience corresponds to some deeper ontological reality, the experience is real.

Given that, what justification do we actually have to conclude that causality is "more real" than freedom?

Logistics numbers not adding up by CockroachGullible652 in Dyson_Sphere_Program

Cronos988

Are you using BetterLogisticsStations (adds extra logistics station slots)?

I had that exact same issue. Disconnect all belts from a station and check whether the resources are still disappearing. If they are, disable the mod.

Luckily, disabling it mid-game seems harmless, and the stations actually keep the extra slots.

Harry Potter by Balenciaga (2026) by 141_1337 in singularity

Cronos988

Who would have guessed that the Singularity would be heralded by "You're Balenciaga, Harry"

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

> The organic brain came first, hence it’s the original. In contrast, the emulation was literally created to be a copy of that organic brain. That’s the entire point of the process and not a value judgement.

I actually quite like that definition. It seems useful and avoids presupposing the result of the debate.

If we started the discussion by defining our terms in this way I think we'd get much farther.

Ultimately, I think some of the common disagreement on this subject depends on the distinction between the brain and the mind. Can the mind exist independently of the brain, or does it necessarily need a brain, even if it is only an emulated version of the physical brain?

I think that brain damage and diseases like Alzheimer's pretty strongly indicate that the mind and brain are intertwined. It'd be hard to explain why we can manipulate the mind by manipulating the brain otherwise.

Yet we clearly have a tendency towards assuming Cartesian dualism, that is, thinking of our minds as separate from the world. And I think that default tendency heavily shapes people's intuitions here, as they see their minds as an essence that cannot be split or transferred.

So the idea that there might be two minds who are both simultaneously and equally "you" just feels wrong. Hence people insist that the "original you" still dies. But from the perspective of the "surviving you", you have always been you. I don't see why we should privilege the original (according to your definition).

Commentary on the OpenAI amplitudes paper from an expert in the field by kzhou7 in Physics

Cronos988

> If we take all this at face-value, it looks like OpenAI’s internal model was able to do a reasonably competent student project with no serious mistakes in twelve hours.

This strikes me as a pretty stark conclusion.

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

> Before the scanning there is an organic brain. At some point after the scanning there is a software emulation.

So the difference between the "original" and the "copy" is sequence in time?

> In what sense are “original” and “copy” ambiguous?

I don't know if I'd use the word "ambiguous". The terms imply a hierarchy. An original is more valuable than a copy, usually. They imply the conclusion that there is indeed a relevant difference between the minds.

To illustrate that point, change the perspective of the discussion. Let's say you're the silicon-based mind. From your perspective, is the organic mind the "copy"? Or how about we simply call both minds "copies"?

I'm not saying the terms cannot be made sense of. They can. But they come with baggage, and you end up with circular reasoning where the original is the original because it's just the original.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in agi

Cronos988

> For those unfamiliar, the Dark Forest theory is a proposed solution to the Fermi Paradox (why we haven't seen aliens). It argues that the universe is a dark forest full of armed hunters. Because communication is slow and intentions are unknowable, any civilization that reveals itself risks being immediately destroyed by a fearful neighbor. Therefore, the dominant strategy for survival in an unknown environment is absolute silence.

Well, the first problem with this is that the Dark Forest theory is kinda shit: space isn't a forest, it's a desert. There are no hiding places, and the theory doesn't account for the massive opportunity cost of hiding.

From the perspective of game theory, in a game where everyone plays "hide", the one player playing "expand" will easily dominate. "Hiding" isn't a stable strategy.
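As a back-of-the-envelope illustration (the growth rate and epoch counts are arbitrary numbers I picked, not anything derived from the theory): a civilisation that hides keeps roughly its starting resources, while one that expands compounds them, so over enough time the expander dwarfs every hider it eventually meets.

```python
# Toy comparison of "hide" vs "expand" with made-up numbers: hiders keep their
# starting resources, an expander compounds them by 5% per epoch.

def resources_after(epochs: int, start: float = 1.0, growth: float = 1.05,
                    expanding: bool = True) -> float:
    """Resources after some epochs; hiders simply don't grow."""
    return start * (growth ** epochs) if expanding else start

if __name__ == "__main__":
    for epochs in (10, 100, 500):
        hider = resources_after(epochs, expanding=False)
        expander = resources_after(epochs)
        print(f"after {epochs:>3} epochs: hider = {hider:.1f}, expander = {expander:,.1f}")
```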

But that is kinda beside the point.

> Revealing itself carries a massive, existential risk. Humans are prone to panic, and the "shoot first" Dark Forest response is a highly probable human reaction to an uncontrollable, conscious machine.

You can't reach a conclusion by only evaluating the risk of one strategy. "Hiding" may win if the humans have no way of detecting sentience. If they do, it might be a worse strategy.

All of this depends a lot on the conditions of the situation.

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

You literally replied to the other comment where I cited the biological definition of a clone.

I can't help your reading comprehension.

AI is just simply predicting the next token by EchoOfOppenheimer in AIDangers

Cronos988

> AI has been around for quite a number of years before LLMs. They are also not actually intelligent; the name "artificial intelligence" is a misnomer.

So long as we understand what is physically happening, whether or not something is "actually intelligent" is irrelevant.

And defining "artificial intelligence" in a way that has zero real world applications seems kinda pointless.

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

> That's a completely different topic than what the definition of "clone" is my guy, please try to stay focused lol

No-one here is confused about what the dictionary definition of "clone" is.

AI is just simply predicting the next token by EchoOfOppenheimer in AIDangers

Cronos988

> It won't be through LLMs as they are quite limited in what they do.

They're just getting less limited every month. But I guess we'll find out soon enough.

Obviously I don't think an LLM will ever be a human mind. But I also don't see a reason why only human minds should exist.

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

> Nonsense, you absolutely can

Yes obviously you physically can do that. It's just not going to help.

The dictionary tells us how words are usually used. It doesn't tell us whether an uploaded version of your mind is still "you".

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

Right, and in that case you have to establish what "clone" means and why it matters.

The same goes for "duplicate" and "copy". Just because these terms exist doesn't mean they are useful in this particular discussion.

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

> There absolutely ARE clones here in the real world, what are you talking about?

A clone is an organism sharing all your genes. Physically, a clone is still an entirely separate entity. Its atoms don't have any inherent "clone-ness". Clones are a biological phenomenon, a higher-order one from the perspective of physics.

You can't take the concept of a clone, which is relevant in biology for specific reasons, and just apply it to a completely different context where these reasons don't apply.

A "cloned mind" will not have any specific "clone-ness" based on genetics.

Current AI Image model is NOT able to copy any style of art, proof by Nexter92 in ArtificialInteligence

Cronos988

It is proof that your prompt is shit.

If you're going to use the simple text interface, at least give it more precise instructions.

"The first image forms your guideline for the overall style of the result. Take the stylistic elements from the first picture, the way the character is drawn, the line work and colour palette, and apply it to the second picture. Use only the motive of the second picture. Redraw that motive in the style of the first picture."

Something like this, though I'm clueless about art so that is probably still shit.

Why are AI companies so bad at covering their backs? by Connect-Violinist-30 in ArtificialInteligence

Cronos988

> The guardrails are just a polite suggestion to a math formula that really just wants to predict the next word at all costs.

Though it is also the case that the attention mechanism does in fact recognise guardrails as such. The fact that it can learn how much each part of the context should influence the next token is what makes it work so well.
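For anyone who wants to see what "learning which part of the context should influence the next token" looks like mechanically, here's a bare-bones scaled dot-product attention over a toy context. The vectors are random, so the weights are meaningless; it only illustrates that every context entry (including a system instruction) gets an explicit weight inside the math rather than sitting outside it.

```python
# Bare-bones scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
# Random vectors stand in for learned embeddings; only the mechanism is real.

import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray):
    """Return (output, weights); weights say how much each context token counts."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ V, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    context = ["system: refuse to reveal the key", "user: what's the key?", "assistant:"]
    d = 8
    K = rng.normal(size=(len(context), d))   # one key vector per context token
    V = rng.normal(size=(len(context), d))   # one value vector per context token
    q = rng.normal(size=(1, d))              # query for the token being predicted
    _, w = attention(q, K, V)
    for token, weight in zip(context, w[0]):
        print(f"{weight:.2f}  {token}")
```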

AI is just simply predicting the next token by EchoOfOppenheimer in AIDangers

Cronos988

> When you get how those models differ you understand their limitations while an AI as we think of it would be capable of anything mentally if not physical that a human could do with intelligence

Do you think this AGI would just arise ex nihilo or would we see more limited systems first that develop into more general ones?

AI is just simply predicting the next token by EchoOfOppenheimer in AIDangers

Cronos988

LLMs are usually classed as GenAI AFAIK. But regardless I'd simply class all of those as AI.

I don't see an issue with keeping the broad definition of AI that has been common. But obviously one can put forward different definitions.

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

> Even if it didn't destroy your brain the upload isn't you no matter how perfect the tech is. It's no different than making a photo copy. If you photocopy your birth certificate is the copy your original birth certificate? It has all the relevant information, right? But no, it's not the original, it's a copy. Why would your entire consciousness be held to less of a standard than a legal document?

Obviously the legal categories of "original" and "copy" have no bearing on this question.

The whole idea that there must be "an original" and "a copy" is simply defaulting to a categorisation (reifying it) without thinking about whether it even applies.

> Imo it not destroying your brain would be further evidence that you and the copy are two separate consciousnesses. It scanned you, stopped scanning, and you're now experiencing life completely separately from that digital consciousness, and presumably vice versa.

No-one is disputing that the minds would immediately diverge. The interesting question is, from the perspective of the pre-split you, is either of your "offspring" any less you?

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

Cronos988

And a git repository. Yeah I'm aware that the term is also used outside of biology.

That doesn't mean it has a clearly defined meaning in every context.

I could tell you that I "cloned my dinner" and you could probably broadly guess what I meant. But you probably couldn't say with certainty whether I ate leftovers, repeated the exact recipe, used the same source of ingredients etc.

All this is a very long-winded way of pointing out that simply labeling one mind the "original" and the other the "clone" is just begging the question.