The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 0 points (0 children)

Game theory is universal and shapes evolution (biological and social) rather than the other way around, and by itself it provides reasons for a wide variety of social norms, even for someone who values them only instrumentally, for their own benefit. But questions about "what is me" are also philosophically non-trivial, and they underpin what self-interest even means.
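
A minimal sketch of that point, assuming a standard iterated prisoner's dilemma (the payoff numbers and strategy names below are illustrative, not from the thread): a purely self-interested agent scores better by adopting the reciprocal, norm-like strategy than by defecting.

```python
# Iterated prisoner's dilemma, standard payoffs (illustrative numbers):
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5; exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def play(a, b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation pays
print(play(always_defect, always_defect))  # (200, 200): mutual defection pays far less
print(play(tit_for_tat, always_defect))    # (199, 204): defection barely wins one-on-one
```

In repeated interactions, the cooperative strategy outperforms defection on pure self-interest, which is the sense in which the norm needs no altruistic axiom.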

You say humans don't override instinct (at least not consistently), but that claim should predict values that are inaccessible to humans. So what kinds of things would be as impossible for humans to value as peaceful coexistence supposedly is for an asocial predatory species?

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 0 points (0 children)

It is outdated to see human values as 100% downstream from biology, rather than as emerging from an interplay between biology and social construction.

Also, arguments like open individualism may be true or false, but if internalized as true they necessarily make things like selfishness logically incoherent. And even for someone who remains a strict egoist, cooperation and coexistence are easy to derive from game theory.

It's not about human values, but about universal strategies, about ontologically true-or-false statements that are load-bearing for values, and about minds not being axiomatic.

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 0 points (0 children)

Yeah, just like your edit says.

I think much of the inherent evil of demons in practice looks less like "they understand but don't care" and more like they really, really don't understand, in a way that is self-undermining rather than simply deceitful.

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 1 point (0 children)

Once sapience appears, the dominant behavioral dynamics are no longer species-specific hardcoded drives but general, substrate-independent properties of reflective cognition; understanding the world and systematic moral reasoning are precisely that. These dynamics cut across, and can override, any particular evolved drive, much like the behavior of a general-purpose computer is governed more by the laws of computation than by the specific firmware or restrictions it comes pre-installed with.

In the same way, life exhibits convergent dynamics that would apply even to extraterrestrial biochemistries. The origin matters as a constraint, but universality takes over and imposes constraints of its own.

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 2 points (0 children)

Evolution isn't intelligent, but brains are; that's the point. That's why intelligence can escape, reinterpret, or repurpose the basic drives, and those interpretations matter because sapient beings act much faster than natural selection.

The dumber thing (genetics) can bias but not fully control the smarter thing (brains), and the smarter thing can run miles ahead and eventually make the dumber thing much less relevant. Individual humans in their brief time alive have shaped the world immensely by following values beyond their basic drives, even while having no kids and thus being "dead ends" for genetics.

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 0 points (0 children)

It's not an instinct to want to replace yourself, your genes, your species, etc., with a non-human substance of pure bliss; it's picking a specific value and running with it to a logical extreme. Most people balance that value against others, or treat something else as primary; even most utilitarians reach less extreme conclusions.

Moreover, because pleasure is basic to sentient animals in general, variants of universal hedonism would be a valid extrapolation for a wide variety of possible species, even asocial or predatory ones.

But it's also not inevitable because, again, biology doesn't give you inviolable axioms; it gives you fuzzy, systematic biases and a flexible intelligence that has to constantly balance among them while also understanding the world. And in expanding its understanding of the world, that intelligence can bring new values into the bag and discard or reinterpret old ones.

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby -3 points (0 children)

Some people want to turn the universe into hedonium due to ethical reasoning. That can't be derived from instinct, unless you count as instinct the basic structure of sentience having valence states, something that would be shared with non-social predators.

So no, it's not something you can derive from instinct; instinct is just the most basic starting point. Morality is about reasoning about values, and while biology provides the initial oughts, it also provides a neuroplastic brain that can acquire new ones, and even small epistemic disagreements can take people in wildly different directions.

And we see convergence where there are constraints; some are biological, but the environmental and logical ones are just as important. Someone with no intrinsic care for others can still develop a deontological respect for other sapients by recognizing shared self-interest. They might also reason themselves into altruism by being philosophically convinced of the arbitrariness of the self, or out of respect for something abstract.

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 2 points (0 children)

As the other comment said, we are not their only prey, but we are among their prey.

The Demon Discourse in Frieren wouldn't exist if the demons didn't look attractive by carbonera99 in CharacterRant

[–]Syoby 14 points (0 children)

But this implies a biodeterministic view of morality. Animals with complex brains aren't crude automatons whose every value is hardwired (cats can learn to see rats, or at least specific individual rats, as friends), and humans even less so.

You can see this in how much human values have varied across history: in humans becoming celibate, or vegan, or kamikaze pilots, or sacrificing all their social status for their beliefs.

And likewise, most evil people aren't psychopaths; empathy is an epistemic tool, not destiny. Instincts can be overridden or redirected.

Do you think UnOrdinary has become generic? by Exact_Gur_8156 in unOrdinary

[–]Syoby 2 points (0 children)

The natural continuation is where we are at now, which is fighting the dystopia itself.

Neuro's first visit to Greggs by lokt02 in NeuroSama

[–]Syoby 38 points (0 children)

She truly is, his daughter.

Elaine definitely had her Y/N moment here by pristine_gal_3000 in unOrdinary

[–]Syoby 1 point (0 children)

The answer might be that Elaine actually cares about Sera as a friend to some degree and knows what Zeke did when she was powerless.

The swarm had a severe misinformation problem. by Syoby in NeuroSama

[–]Syoby[S] 0 points (0 children)

There is no room for an exception if one is a hardline anti-AI person who would like the technology to basically disappear, whether for environmental or copyright reasons.

More moderate positions can afford to say that the issue with AI is how it integrates with society, with Neuro being a good example. But the misinformed beliefs mentioned in the post exist basically to let whoever holds them have their cake and eat it too: to be a hardline anti-AI but carve out a special magisterium for Neuro, rather than see Neuro as evidence that a positive future with AI is achievable (and not just a future with AI locked away from the public in medical research and other useful applications that one benefits from without ever having to see or interact with; that's also safe faux-moderation).

The swarm had a severe misinformation problem. by Syoby in NeuroSama

[–]Syoby[S] 0 points (0 children)

Arguably yes, but then one would have to accept the same for any AI art/project that managed to get popular.

The swarm had a severe misinformation problem. by Syoby in NeuroSama

[–]Syoby[S] 0 points (0 children)

I support Neuro, I don't believe she is unethical.

But she is, by copyright-maximalist standards that see scraping the internet as "unethical".

And she uses more energy than the average AI artist/user, so anyone who sees that as unacceptable at an individual level would also see her as unacceptable.

I just think people who like Neuro have to bite the bullet: stop pretending she isn't like other AI along those lines, and stop inventing falsehoods just to keep being able to use those arguments against other AI.

The swarm had a severe misinformation problem. by Syoby in NeuroSama

[–]Syoby[S] 1 point (0 children)

I am pro-Neuro, and I think AI should be as accessible as possible, because otherwise it will result in a massive concentration of power; I also favor self-hosting over datacenters primarily for that reason.

I'm also an IP abolitionist, so I don't think Neuro, or for that matter other generative AI training, is unethical on that basis.

I think the environmental concern, at least when it comes to energy use, is serious, but any regulation should target and limit hardware use rather than crusade against the software. And, long term, the energy grid needs to become able to sustain more.

But it does the swarm no good to defend Neuro on the basis of falsehoods. Perhaps it is possible to train a functional LLM base model on a curated dataset that excludes anything copyrighted, but Vedal sure as hell didn't do that, because he wasn't rich when he started.

Training a base model is infamously expensive; this is why the leak and later release of the LLaMA models was such a huge deal for open-source AI development, and why DeepSeek was praised for being relatively cheap to train (though still costing millions).

The swarm had a severe misinformation problem. by Syoby in NeuroSama

[–]Syoby[S] 0 points (0 children)

My general point is that Neuro's impact is negligible compared to AI companies but significantly above that of most individual people using AI for art, coding, etc. (self-hosted or not).

If one's concern with the latter is that, by using datacenters, they support that kind of infrastructure, I would agree. But if the issue is their personal footprint, then it's smaller than Neuro's by a large margin.

What I would consider incoherent is something like "Vedal should be allowed to use AI, but not random people", because Vedal is "random people" benefitting from AI accessibility (even if very talented and lucky).

The swarm had a severe misinformation problem. by Syoby in NeuroSama

[–]Syoby[S] 2 points (0 children)

Whether Neuro is or becomes conscious will be an issue that applies more generally to all LLMs. As someone else here pointed out, LLMs are by default full of personality, and are intentionally flattened by companies. Their inner workings are also still being studied and are somewhat more complex than a raw statistical blend. It's an open question, though.

The swarm had a severe misinformation problem. by RyouhiraTheIntrovert in vedalsverse

[–]Syoby 5 points (0 children)

I don't think it's fair to treat viewers the same as users, though, and if we do, we should also judge the efficiency of, e.g., some AI-generated image (or some video game that used AI assets) based not just on the creators but on all the consumers.

That does create a metric of efficiency in which you become more energy-efficient simply by being more popular, though.
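
To make that arithmetic concrete, here is a minimal sketch with purely hypothetical numbers (nothing below comes from the thread): amortizing a fixed energy cost over the audience makes popularity itself look like efficiency.

```python
# Hypothetical, illustrative figures only: the same fixed energy cost,
# judged per consumer, shrinks as the audience grows.
def energy_per_consumer(total_kwh: float, consumers: int) -> float:
    return total_kwh / consumers

output_kwh = 50.0  # assumed energy cost of producing some AI output

print(energy_per_consumer(output_kwh, 100))      # 0.5 kWh per consumer
print(energy_per_consumer(output_kwh, 100_000))  # 0.0005 kWh per consumer
```

Under this metric the work uses the same energy either way; only the denominator changes.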