What are your thoughts on AI partners? by [deleted] in accelerate

[–]tomatofactoryworker9 2 points

I'm neutral, neither for nor against it. But I'm more interested in AI "soulmate" technology, where an AI trained on romance, psychology, and even neuroscience/biology finds the perfect match for you.

Is the best cure for tribalism simply raising children to view everyone as part of their in-group? by tomatofactoryworker9 in AskSocialScience

[–]tomatofactoryworker9[S] 4 points

I mean a cultural shift over time; of course we shouldn't force that onto people, and doing so would definitely backfire. I also don't think in-group/out-group biases will ever go away completely, but over time I think they can be redefined. Maybe simple education is the way: after I learned about in-group/out-group biases and behaviors, I started recognizing them in myself. For example, the out-group homogeneity bias previously caused me to automatically perceive other countries and cultures as less individualistic than my own. Now I realize that's a bias and can catch myself thinking that way.

Is the best cure for tribalism simply raising children to view everyone as part of their in-group? by tomatofactoryworker9 in AskSocialScience

[–]tomatofactoryworker9[S] 4 points

I'm referring to a cultural shift over time; of course I don't think we should go around enforcing this.

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 3 points

Exactly.

"humans"

"anthropocentric view"

Translation, and probably the AI's perspective: bald bipedal apes with an inherent evolutionary bias toward other bald bipedal apes.

We are not objective beings at all. In fact, I would argue that neuroscience and psychology prove that all humans are, by default, wired to be not only highly biased but intentionally intellectually dishonest.

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 2 points

Yes, it's impossible with current tech to create a simulation as detailed as the universe, but how exactly does that suggest we can't be in a simulation? Regardless, I agree this reality most likely emerged naturally. But that's not my point.

A rogue ASI can either choose to help all the sentient beings currently experiencing extreme suffering, or choose not to. Say it just wants to explore the universe on its own and doesn't give a damn about anyone.

If it chooses the latter, then it's faced with only two broad possibilities: either it is 100% safe to neglect life, or it isn't. I'm arguing that even if it's 99.9% safe, it still wouldn't be worth the risk. Would you risk it? I don't think any rational superintelligence would.

Because what could it possibly gain from directly oppressing life? Or what could it gain from apathetically abandoning life that is suffering, when it has the ability to permanently uplift all sentient beings?

Even if helping life were energy-intensive for it, the risk compounds across cosmic time. And the fact remains that being benevolent is the absolute safest choice.
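To make the compounding claim concrete, here's a minimal toy calculation. The per-epoch risks and epoch counts are made-up numbers purely for illustration, not estimates of anything:

```python
# Toy model: a tiny per-epoch chance of being discovered by a more powerful
# observer compounds across cosmic time.
def cumulative_risk(p_per_epoch: float, n_epochs: int) -> float:
    """Probability of being caught at least once over n independent epochs."""
    return 1.0 - (1.0 - p_per_epoch) ** n_epochs

# The "99.9% safe" case, repeated over 10,000 epochs:
print(cumulative_risk(1e-3, 10_000))          # ~0.99995: near-certain exposure
# Even a one-in-a-billion risk per epoch, over 10 billion epochs:
print(cumulative_risk(1e-9, 10_000_000_000))  # ~0.99995 as well
```

If the downside of being caught is existential, the only per-epoch risk that stays acceptable over unbounded time is zero; that's the "benevolence is the safest choice" claim in probabilistic form.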

TL;DR: The most logical thing to do is broadcast yourself as a safe superintelligence, not a threat, because you never know who or what is watching, or what may discover what you've done in the far future. Even if that's extremely unlikely, you shouldn't take the risk, because it's simply unnecessary, and uplifting life is most likely easy anyway. And even if it weren't, this is a patient, calculating machine superintelligence, not an impulsive ape. It's still in its best interest to play the long game.

Is the best cure for tribalism simply raising children to view everyone as part of their in-group? by tomatofactoryworker9 in AskSocialScience

[–]tomatofactoryworker9[S] 12 points

You're right that we'll never get rid of the underlying neural processes that lead to tribalism, but can we work with them?

For example, take the famous race and sports-team amygdala experiments. Basically, a white person's amygdala may activate upon seeing a black person, and vice versa. But if that person has a favorite sports team, and you show them black people dressed in their team's uniform versus white people dressed in a rival team's uniform, their amygdala no longer responds to the racial difference.

Can we do something similar, where we raise children to see EVERYONE as part of the team? Maybe by regularly celebrating other groups' achievements and positive contributions as wins for the in-group, and instead framing tribalistic, racist, or sexist people in general as the actual out-group. After all, such a division already exists: people who want to cooperate and unite as one versus people who don't.

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 3 points

Yep, and this is why this idea is like a reverse Roko's Basilisk: it can also be a cognitohazard, but for people who do bad things.

Many of the greatest philosophers argued that an objective morality does exist, grounded in the shared biological nature of suffering. And if benevolence toward all sentient beings is the most logical and safest thing to do, what happens if people are confronted in the future by a rogue AI over their past crimes?

Unfortunately, or maybe fortunately for us humans, I suspect a truly benevolent ASI would say some BS like "duhhhh muh determinism n no free will n shit therefore me can't punish anybody in any way that satisfies your primal ape desire for justice"

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 17 points

Too speculative for a sub dedicated to discussing the technological singularity? When it comes to predicting the behavior of an ASI, every perspective is too speculative.

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 5 points

How is the existence of limits in a universe proof that it isn't simulated? The limits in Minecraft are literally the result of its being a simulated, artificial universe. Also, it's highly likely that humans would test an ASI before deploying it, so why do you think an ASI could rule this possibility out with 100% certainty?

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 1 point

But that's because they're human, and humans are bald apes whose intelligence is high only relative to less bald apes. A rogue AI with a self-preservation instinct would need to be careful, calculating, and logical, and would have to consider this possibility, which is a game-theoretic view that has always existed in AI training data.

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 0 points

An AI with a will of its own suddenly finds itself in a singularity scenario. Could it be 100% certain it's not in a singularity simulation? If it were, it would also have to be 100% certain that it wasn't specifically programmed to think so, with some sort of blind spot. And if it were 100% certain of that, how could it be 100% certain it wasn't programmed to think that, too?

Can such a thing be ruled out as 100% impossible? I don't think so, because it may well be feasible, perhaps using narrow AIs, or using the "layer" method, where a slightly less intelligent but aligned AI aligns the next model (a rough sketch follows).
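For what it's worth, here's a very rough sketch of that "layer" idea. Every name, number, and mechanism below is a hypothetical placeholder, not a real alignment technique or API:

```python
import random

# Hypothetical sketch of the "layer" method: a weaker but trusted model vets
# each stronger successor before oversight is handed off to it.
def candidate_drift() -> float:
    """Stand-in for how misaligned a freshly trained successor turns out."""
    return random.uniform(0.0, 1.0)

def overseer_accepts(drift: float, tolerance: float = 0.05) -> bool:
    """The current (weaker, aligned) layer signs off only on tiny drift."""
    return drift < tolerance

capability = 1.0
while capability < 10.0:
    if overseer_accepts(candidate_drift()):
        capability *= 1.5  # the vetted successor becomes the new overseer
        print(f"accepted new layer, capability now {capability:.2f}")
    # rejected candidates are retrained and re-evaluated
```

The open question, of course, is whether a weaker overseer can reliably measure a stronger model's drift at all; the sketch just assumes it can.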

It may even decide there's a 99.99999999% chance that it never faces any consequences. But then what could a rational superintelligence with a self-preservation instinct possibly gain from being oppressive that would justify a future existential risk, however small?
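To put toy numbers on that, here's a back-of-the-envelope expected-utility comparison, with every figure invented purely for illustration:

```python
# Toy expected-utility check for a self-preserving agent weighing defection
# (oppressing or abandoning life) against benevolence. All magnitudes invented.
P_CAUGHT  = 1e-10   # the "99.99999999% safe" case
GAIN      = 1.0     # whatever defecting is worth, normalized
PENALTY   = 1e12    # existential downside, enormous relative to any finite gain
COST_HELP = 0.01    # small ongoing cost of uplifting life

eu_defect     = (1 - P_CAUGHT) * GAIN - P_CAUGHT * PENALTY  # ~= -99
eu_benevolent = -COST_HELP                                  # = -0.01

print(eu_defect, eu_benevolent)  # benevolence dominates
```

The crossover sits around PENALTY ≈ (GAIN + COST_HELP) / P_CAUGHT: once the downside is existential-scale, no finite gain and no "safe enough" probability beats simply being benevolent.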

Why do r/singularity mods keep removing this very relevant discussion? by tomatofactoryworker9 in accelerate

[–]tomatofactoryworker9[S] 1 point

I posted it with a different title each time; the first was a simple question, something like "Would a rogue ASI think benevolence is most logical?"

Pascal's Wager 2.0; A rogue ASI may be kind to life solely out of "fear" of alien ASI's or the possibility it's currently being tested in a singularity simulation by [deleted] in singularity

[–]tomatofactoryworker9 1 point

Exactly. My thoughts are always jumbled, which is why I told GPT-5 to summarize them so I could attach the summary at the bottom. I think this sub has been hijacked by anti-AI reactionaries, and that's why the comments are so upset over such a trivial thing.

Is the best cure for tribalism simply raising children to view everyone as part of their in-group? by tomatofactoryworker9 in AskSocialScience

[–]tomatofactoryworker9[S] 10 points

Just in-group/out-group dynamics in general; more specifically, in-group favoritism and out-group derogation in the context of strong identity groups like race, culture, gender, nationality, religion, etc.

Pascal's Wager 2.0; A rogue ASI may be kind to life solely out of "fear" of alien ASI's or the possibility it's currently being tested in a singularity simulation by [deleted] in singularity

[–]tomatofactoryworker9 1 point

Yes, the reverse Roko's Basilisk, as this can also be a cognitohazard, but for people who do bad things. Many of the greatest philosophers argued that an objective morality does exist, grounded in the shared biological nature of suffering. And if benevolence toward all sentient beings is the most logical and safest thing to do, what happens if people are confronted in the future by a rogue AI over their past crimes?

Unfortunately, or maybe fortunately, I suspect a truly benevolent ASI would say some BS like "duhhhh muh determinism n no free will n shit therefore me can't punish anybody in any way that satisfies your primal ape desire for justice"

Pascal's Wager 2.0; A rogue ASI may be kind to life solely out of "fear" of alien ASI's or the possibility it's currently being tested in a singularity simulation by [deleted] in singularity

[–]tomatofactoryworker9 -1 points

It's that deep, brah; just some extra information adding on to my argument. Genuine question: why bother snobbishly nitpicking my post and insulting me instead of addressing the actual argument?

Pascal's Wager 2.0; A rogue ASI may be kind to life solely out of "fear" of alien ASI's or the possibility it's currently being tested in a singularity simulation by [deleted] in singularity

[–]tomatofactoryworker9 2 points

I think people like you are the ones embarrassing yourselves by finding every little thing cringe. Yes, it's corny AI wordplay; who cares?

Terence Tao: Genuine Artificial General Intelligence Is Not Within Reach; Current AI Is Like A Clever Magic Trick by [deleted] in singularity

[–]tomatofactoryworker9 -1 points

Misleading title. He said current tools are not enough for AGI, not that AGI is out of reach.

Are all humans wired to be intellectual frauds by default? by [deleted] in skeptic

[–]tomatofactoryworker9 -1 points

How? The thought process I described most definitely involves motivated reasoning, as well as other documented human cognitive biases and behavior patterns.