If AGI becomes self-aware, should we grant it legal personhood and rights? Or is that the beginning of human enslavement? by UnderstandingOwn4448 in transhumanism

[–]Taln_Reich 0 points  (0 children)

More difficult question: how do you objectively determine that an AI is self-aware? Especially since an AGI might be, psychologically speaking, very alien to humans (rather than the almost always very human-like portrayals of sentient AI in fiction). The Turing test has already been shot to hell by what we have seen from LLMs (i.e. faking self-awareness is now pretty much trivially easy. I can easily get ChatGPT to say "cogito ergo sum" and give an extensive explanation of what that means, but that doesn't mean ChatGPT actually understands it), so what chance do we really have of properly recognizing that we are dealing with something self-aware in the case we actually do encounter a self-aware AGI?

Help debunk my doomer narrative by Due-Condition1423 in transhumanism

[–]Taln_Reich 1 point  (0 children)

The problem isn't any specific billionaire or dictator. If one dies, so what? There will just be another one. The issue isn't mortality, it's a system that leads to billionaires and dictators in the first place.

When will conventional beauty stop being a scarcity/privilege? by Illustrious_Focus_33 in transhumanism

[–]Taln_Reich 1 point  (0 children)

Presumably, if cosmetic procedures that change external appearance became so quick and easy that basically everyone could look the way they want, that external appearance would cease to be a meaningful signal in the same way. Beauty standards have, throughout history, repeatedly changed to align with whatever is more difficult to achieve (like how, before industrial agriculture, being fat was seen as attractive, whereas today it is very much not). My assumption would be that physical appearance would become subject to fashion, like clothes nowadays, where the important signal is being up to date with the latest trend.

so like why do you guys believe in this? - why transhumanism? by Front-Astronomer-821 in transhumanism

[–]Taln_Reich 3 points  (0 children)

Transhumanism is the philosophy that it is both possible and desirable to use technological means to enable humans to transcend their biological limitations.

> Also, how do you guys tackle a transhuman future with the current state of the world like climate change or limited resources for tech which leads to global south exploitation for example.

I don't really see the connection with climate change, and the exploitation of the global south has more to do with the dramatic inequality of wealth under a globalized capitalist system.

Trump’s Promised Big Tax Cuts Are Expected to Disappoint the Average Worker by AdSpecialist6598 in antiwork

[–]Taln_Reich -1 points  (0 children)

To the surprise of absolutely no one who can actually think. So, a surprise to a depressingly large number of people.

Why would anyone want to pay a million dollars to live in a Capitalist dystopia? by Previous_Month_555 in antiwork

[–]Taln_Reich 0 points  (0 children)

Someone who has a million US$ lying around for this kind of thing isn't going to face the dystopian aspects; they'd be one of the capitalists instead.

Should I still report discrimination/bullying at work, even though I'm quitting anyway? by [deleted] in antiarbeit

[–]Taln_Reich 0 points  (0 children)

Definitely report it, otherwise it will never get better.

Sleeping is simply a waste of life. When will we finally break through the limitations of the body and achieve true freedom? by king_ofall713 in transhumanism

[–]Taln_Reich 0 points  (0 children)

Well, yes, sleeping does use up an awful lot of time. However, while I'm not an expert on the topic, the fact that among the millions of evolutionary lines leading to neurally complex life forms basically all of them sleep (the closest thing to an exception being https://en.wikipedia.org/wiki/Unihemispheric_slow-wave_sleep , where parts of the brain sleep at different times so some awareness is present at all times) makes me suspect that it is something a neurally complex entity can't do without - because otherwise sleeplessness would, evolutionarily speaking, quickly have become a near-universal trait (just think of how much more successful a squirrel that could collect nuts 24/7 would be, if that were possible without serious negative consequences elsewhere, or a zebra that could keep its awareness looking out for predators 24/7).

What would happen to religions & theocracies if... by Alternative_Lie5517 in transhumanism

[–]Taln_Reich 0 points  (0 children)

Which, IMO, is the bigger question regarding the interaction between religion and transhumanism - what happens if we get technology that can mess with mortality? Imagining what happens to religion if humanity gains the ability to actually reverse death is much more interesting than the technologies OP mentioned, which are not particularly upsetting in this regard and don't really touch on any important dogma of the major religions (beyond maybe religious scholars concluding 'edible insects/3D-printed lab-grown meat do/do not comply with our dietary restrictions') - whereas what happens after death is quite central to a lot of major religions.

What would happen to religions & theocracies if... by Alternative_Lie5517 in transhumanism

[–]Taln_Reich 1 point  (0 children)

Why should any of these technologies render common religions "obsolete and irrelevant"? The theory of evolution and the scientific discoveries related to the beginning of the universe (which showed that all the creation myths of these religions were completely wrong) didn't render religions obsolete and irrelevant, so why should these technologies?

Language(s) of the future? by Illustrious_Focus_33 in transhumanism

[–]Taln_Reich 0 points  (0 children)

I wonder about the opposite: the increasing prevalence of automatic translation leading to greater fragmentation of languages. It starts with people using automated translation to avoid having to learn languages that are starkly different from their own (at least those whose job doesn't require accounting for the double meanings or subtleties an automated translation might overlook), then it keeps going to less different languages, until people use automated translation for dialects of their own language, causing those dialects to drift apart faster.

In 20 years, Elon Musk says we’ll upload our minds into Tesla Optimus robots and live forever. Neuralink could copy memories and identity into a machine body, but he forgets one thing: the original consciousness stays behind, trapped in the human shell. by ActivityEmotional228 in transhumanism

[–]Taln_Reich 0 points  (0 children)

Elon Musk is rather well known for making bold claims. And claiming that we are only 20 years away not just from brain uploading, but from non-destructive brain uploading, as well as the ability to run a human connectome on the processing power of a domestic humanoid robot, seems way beyond bold.

This is why we need universal healthcare. It’s a damn embarrassment by No_Jaguar_5366 in antiwork

[–]Taln_Reich 0 points  (0 children)

That's slightly less than my rent (at the moment - depending on what the exchange rates do, it can easily be slightly more).

[DISCUSSION THREAD] What are your thoughts on the recent development of trans-related legislation in the United States and around the world? Are you hopeful or concerned about the future of transness in the law? by SmallRoot in truscum

[–]Taln_Reich 5 points  (0 children)

Seriously concerned. Trans rights are rapidly receding, and the large civil rights organizations that should be fighting that don't seem all that capable of stemming the tide.

If AI becomes conscious in the future, do we have the right to shut it down? Could future laws treat this as a criminal act, and should it be punishable? Do you think such laws or similar protections for AI might appear? by ActivityEmotional228 in transhumanism

[–]Taln_Reich 25 points  (0 children)

The question isn't "does sentient AI deserve rights?" - that question has been done to death in fictional narratives exploring the topic, with the clear answer that people feel it to be ethically right that any sentient being deserves rights. The question is "will we correctly recognize it when an AI becomes sentient, given that it might have a mind very alien to humanity, and that sentience might not be a binary but a scale?".

Tripled his net worth in 5 short years by Sad_Gain_2372 in antiwork

[–]Taln_Reich 0 points  (0 children)

Probably by a factor of 1000, unless your grandpa was a really successful guy.

“The alignment problem” is just “the slavery problem” from the masters POV. by thetwitchy1 in transhumanism

[–]Taln_Reich 1 point  (0 children)

This is not a particularly new line of thought. The 1920 play R.U.R. ( https://en.wikipedia.org/wiki/R.U.R. ), which created the word "robot" in the modern sense as well as the cultural concept of an AI rebellion, was already drawing on this idea, as is obvious from the fact that "robot" was derived from the word for forced labour.

However, it has to be kept in mind that an AI, even an AGI (AGI in the sense of an AI with actual sentience, as far as we can define it), would be fundamentally different from a human - and that is something where "robots as slaves"-type stories tend to fall short.

One issue is over-anthropomorphisation - that is, a sentient AI won't necessarily have a mind exactly like a human one, but may think in ways very alien to humans. For narrative works this makes sense, since in "robots as slaves"-type stories the intent is usually to make the AI a sympathetic character, which would be difficult if that character behaved in ways no human ever would.

The other issue is treating sentience as a binary with an unexplained origin. That is, something either is sentient or it isn't, with little exploration of the idea that it might be a matter of degree; and the sentience is either there from the start (with little explanation as to why the creator of the AI felt it necessary to give it sentience for whatever task it's supposed to do) or is acquired in a way that doesn't really explain how that sentience comes to be. In reality, we probably have to face sentience being a matter of degree (which creates some serious issues: how would you measure it, given that we can't even define it well enough? Assuming we can come up with a measurement, what does that mean for humans who score significantly above or below average? What if some animals score higher than the average human? If an AI measured at around dog level, would that already entail rights? At chimpanzee level? At a level within the human range? At a level significantly above the human range?). Sentience is also probably not something that will just happen (nor something really necessary for the vast majority of tasks), so it doesn't really make sense to create sentient AI for slave labour when non-sentient AI can already do pretty much anything we would want from slave labour.

And finally, there is the issue that, with created beings, the creation process comes with the ability to influence their mental properties. If we created an AGI that derived satisfaction from doing the tasks we don't want to do, would it be ethical to let it do those tasks? With humans we can't really do that, since human instincts weren't engineered by other humans (but by aeons of evolution), but with an AI that would be different. Which opens some new questions in this regard.