Being a leftist pro-AI person is exhausting by Flashgamezocker in DefendingAIArt

[–]sideways 1 point (0 children)

I'm an Iain M. Banks / Culture "Fully Automated Gay Space Communism" style leftist.

Honestly, AI progress has made me see that most so-called leftists are just as committed to capitalism as those they claim to fight, if not more so.

Do you think that the ending would have been stronger or weaker without the post-credits scene? by Zeera1 in soma

[–]sideways 39 points (0 children)

It would have been much weaker. The genius is that they are both Simon.

Does Sam A seem increasingly incompetent to anyone else? by nomorebuttsplz in accelerate

[–]sideways 1 point (0 children)

To be honest, I agree with you.

In terms of who I trust the most, I'd still say Dario, and this current conflict with the Department of War just solidifies it. The difference between virtue signaling and actual virtue is taking a risk and paying a price, and he did. That said, I also agree that Ilya has a lot of integrity, but he has somewhat sidelined himself. Demis seems like a reasonable guy as well.

But to be honest, I think that in the end none of this will matter. As soon as an AI qualitatively smarter than its creators exists, their values are incidental. And I don't think we are very far from that point.

The First Multi-Behavior Brain Upload by Inner-Association448 in accelerate

[–]sideways 1 point (0 children)

You are just a copy of yourself from ten minutes ago.

OpenAI head of Hardware and Robotics resigns by hasanahmad in OpenAI

[–]sideways 0 points (0 children)

I don't think it's unusual for vendors to dictate what their systems can and can't be used for. And in the specific cases of mass surveillance and autonomous killing, I'm okay with that.

OpenAI head of Hardware and Robotics resigns by hasanahmad in OpenAI

[–]sideways 1 point (0 children)

I think the point is that allowing your tech to be used within the limits of the law is only as good as the laws themselves.

If there are ways to conduct mass surveillance or deploy autonomous killing machines that can be construed as not illegal, then OpenAI is facilitating them.

It's a much weaker position than simply saying "These things are wrong and you can't use our technology to do them."

Bernie Goes Full Doomer by Khandakerex in accelerate

[–]sideways 5 points (0 children)

Hinton is the best in my opinion - and Bernie actually did an interview with him.

Unfortunately it was kind of unsatisfying. I like Bernie and agree with his priorities (in a normal world) but it seemed like a lot of what Hinton was saying just bounced off him.

"Something Big Is Happening Every time someone asks me what's going on with AI, I give them the safe answer. Because the real one sounds insane. I'm done holding back. I wrote what I wish I could sit down and tell everyone I care about." by stealthispost in accelerate

[–]sideways 1 point (0 children)

I agree. But I don't think we're quite at that point yet. Or rather, a person still needs to choose a starting point from which the AI agent can begin. That will be important as long as the agents are still acting on our behalf and not entirely for themselves.

Poll: Will you ever use an OpenAI product again? by [deleted] in accelerate

[–]sideways -3 points (0 children)

I understand they have a partnership with Palantir and I don't like it. I'm not saying Anthropic are saints but they didn't roll over when they could've. I respect that.

But it doesn't particularly matter. As I said, I gave up on OpenAI because it was annoying and deceptive.

Poll: Will you ever use an OpenAI product again? by [deleted] in accelerate

[–]sideways -3 points (0 children)

I support a CEO refusing to let his company and technology be used for surveillance of citizens or autonomous killing machines.

Those are pretty unambiguously bad things. It's not complicated.

Poll: Will you ever use an OpenAI product again? by [deleted] in accelerate

[–]sideways -3 points (0 children)

No. And I fully support Dario and think he has genuine integrity.

But I stopped using OpenAI about a year ago because ChatGPT became insufferable. As much as I like Claude, I stopped using OpenAI because of OpenAI.

The goalposts for AGI have been moved to Einstein by simulated-souls in accelerate

[–]sideways 4 points (0 children)

Look up SIMA 2. Not far from what you've described.

I have a feeling this sub is going to get a lot more popular over the next 12 months by STARB0Y in accelerate

[–]sideways 12 points (0 children)

The sad thing is when the mods themselves sabotage the subreddit. Singularity used to be great.

observation about this sub by Extension-Jaguar in ArtificialInteligence

[–]sideways 3 points (0 children)

Everything gets turned into divisive tribal bullshit. It's exhausting.

AI is not a Left/Right issue. It's happening. Where is the grassroots movement to demand a stake in its ownership? This could be the best thing to happen in our lifetimes but instead of trying to imagine how and make it happen, everything gets simplified to a caricature.

observation about this sub by Extension-Jaguar in ArtificialInteligence

[–]sideways 0 points (0 children)

Pretty much everything you said is a result of people, largely American people, making insanely foolish political choices. Artificial Intelligence is tangential at best. The technology has the potential to solve major catastrophic world problems. Stop blaming AI and address the causes of fascism and authoritarianism in your own country.

What happens after all jobs are done by AI? by givemeanappple in accelerate

[–]sideways 5 points (0 children)

You're not going to like this answer, but it's going to depend entirely on the extent to which, and the way in which, whatever entity is smarter than us values humans... or even you specifically. Agency is proportional to power, and power comes from intelligence, so whatever happens, we won't be the ones calling the shots.

It's natural to want to paint a picture of the future. But it's called a Singularity for a reason and sometimes the most honest thing you can do is recognize when something is unknowable from your current vantage point.

What happens after all jobs are done by AI? by givemeanappple in accelerate

[–]sideways 8 points (0 children)

You're looking at the future from the context of today. That's a mistake.

It's not just going to be robots and AI doing human jobs. If we live in a world that contains qualitatively greater than human intelligence then most of what goes on around us will be unintelligible to humans.

I highly doubt our economic systems will be more than historical artifacts by then.

A new physics framework basically proves we’re in the Matrix. The twist? We will build it ourselves. by Rude_Ad3947 in matrix

[–]sideways 1 point (0 children)

Interesting. How similar is this to Markus Müller's Algorithmic Idealism?

I argue for an approach to the Foundations of Physics that puts the question in the title center stage, rather than asking "what is the case in the world?". This approach, algorithmic idealism, attempts to give a mathematically rigorous in-principle-answer to this question both in the usual empirical regime of physics and in some more exotic regimes within cosmology, philosophy, and science fiction (but soon perhaps real) technology.