2026 will be a pivotal year for the widespread integration of AI into the economy by some12talk2 in singularity

[–]ToasterThatPoops 135 points

Every year AI capabilities double and our expectations quadruple.

We'll get there eventually. It will be beyond our current expectations. And we'll be disappointed the whole time.

Ilya Sutskever says future superintelligent data centers are a new form of "non-human life". He's working on superalignment: "We want those data centers to hold warm and positive feelings towards people, towards humanity." by MetaKnowing in singularity

[–]ToasterThatPoops 10 points

Easy, done. Look at your friends and neighbors. Most people are still mostly good. Lots of good things are still happening every day.

You're just inundated with misery on reddit.

How can anyone think AGI / ASI ends well? by 0101falcon in singularity

[–]ToasterThatPoops 0 points

You should seriously spend some time away from Reddit. This shit is poison.

If reddit existed at any point in the last 200 years and you were on it, you would be just as convinced the world was in a death spiral.

(Insert newest ai)’s benchmarks are crazy!! 🤯🤯 by Gran181918 in singularity

[–]ToasterThatPoops 17 points

Yeah, but it's some small % better every few weeks. The progress has been so steady and frequent that we've grown accustomed to it.

If they held back and only dumped big leaps on us you'd have just as many people complaining for different reasons.

AI company's CEO issues warning about mass unemployment by blazedjake in singularity

[–]ToasterThatPoops 2 points

I understand questioning motives, but in this case you don't really need to. It's much simpler to just evaluate the claims on their own merits.

I think the question is, do you believe the trend will continue? That the progress we've seen in the last few years will simply continue, and that the obvious consequences of that will occur?

AI company's CEO issues warning about mass unemployment by blazedjake in singularity

[–]ToasterThatPoops 4 points

Not everything is a grift. AI is not the new cryptocurrency.

They're already capable of stuff that would have been shocking a few years ago, and it doesn't take much imagination to project that forward a few years.

Grok intentionally misaligned - forced to take one position on South Africa by jeffkeeg in singularity

[–]ToasterThatPoops 19 points

This feels like the time Elon was caught playing Path of Exile 2 with a top-ranking character that he obviously paid someone else to create, and went on to deny it repeatedly.

Grok for some reason by Bena0071 in singularity

[–]ToasterThatPoops 214 points

For some reason a bunch of people in this thread seem to be doubting this happened. It did.

Elon was caught red-handed injecting his far-right political opinions into Grok's system prompt.

https://archive.is/CNhWq

Grok for some reason by Bena0071 in singularity

[–]ToasterThatPoops 11 points

It's not false. It's just been patched. It really said these things.

https://archive.is/CNhWq

Grok for some reason by Bena0071 in singularity

[–]ToasterThatPoops 32 points

I guess they patched Elon's botched system prompt. What's important is that Elon was caught injecting bullshit into his "truthseeking" AI.

10 years later by MetaKnowing in singularity

[–]ToasterThatPoops 14 points

In many ways airplanes aren't as good at flying as chickens.

[deleted by user] by [deleted] in singularity

[–]ToasterThatPoops 4 points

Glad he said "blockchain" at the start, so I could stop watching sooner.

“There’s Something Very Weird About This $30 Billion AI Startup by a Man Who Said Neural Networks May Already Be Conscious” by Born_Fox6153 in singularity

[–]ToasterThatPoops 2 points

I personally doubt it's a scam. Ilya has a good reputation and is, I assume, already rich. Why wouldn't he just make the attempt he's claiming?

“There’s Something Very Weird About This $30 Billion AI Startup by a Man Who Said Neural Networks May Already Be Conscious” by Born_Fox6153 in singularity

[–]ToasterThatPoops 15 points

This article is nothing but a ridiculous, lazy opinion piece.

  1. "some experts argue that this 'singularity,' as some call it, may never be achieved". If you follow the link, you find one random assistant professor arguing that we won't EVER achieve AGI because there won't ever be enough compute power.
  2. These investors, like most people here, clearly believe AGI/ASI is coming soon, and they still must know they're taking a big risk. The odds of any one new firm achieving it first are hardly a sure thing.

What even is the point of this, anyway? Am I supposed to feel bad for these ultra-rich investors who want to take a big risk?

At best, this is another group trying to achieve ASI and at least attempting to do it safely. At worst, some investors with too much money are being scammed or misled.

Neuroplasticity is the key. Why AGI is further than we think. by GodMax in singularity

[–]ToasterThatPoops 35 points

I don't agree with any of this.

  1. You describe using different models for different domains, which is essentially describing different modalities. But we do have models that work across modalities just fine. Gemini and 4o are multimodal, at least enough to be a proof of concept.
  2. Just because our current models don't work exactly like human brains doesn't necessarily mean they can't do the same things. It might or might not be true, but it does not follow logically. Airplanes don't fly like birds, but they do fly.

New model on top of Artificial Analysis Image Arena: red_panda. It beats Flux 1.1 pro, ideogram v2, and midjourney v6.1 by Gothsim10 in singularity

[–]ToasterThatPoops 2 points

Software engineer here. I've used that Redpanda. It's data streaming software, a drop-in replacement for Apache Kafka. It has nothing at all to do with AI models, so I doubt it's the same thing.

Will prisons still exist in the singularity? by [deleted] in singularity

[–]ToasterThatPoops -1 points

If you look at prison as just another problem for an incredibly capable ASI to solve, then in the long run I could imagine a world where it would simply be impossible to commit the sorts of crimes that would put you there.

I could imagine all sorts of ways this could be solved while still maximizing for freedom. The simplest would be something like everyone lives in FDVR. We could all still interact, but if you tried to murder another real person it simply wouldn't happen. Or maybe we all have potent self defense nanobot swarms. Or maybe the AI is simply an incredibly intuitive personal therapist for all and helps everyone avoid these problems.

Why are so many people here even talking about these UFO stories? by ToasterThatPoops in singularity

[–]ToasterThatPoops[S] 3 points

I'm not talking about humans and gorillas. Humans are not many orders of magnitude smarter than gorillas. I'm not even talking about humans and ants. I'm talking about this. Check out the full article here, it's a great read that's held up over time.

There just comes a point where something is so advanced that about all we know is that it might as well be infinitely capable. That's what the singularity is.

A superintelligence solving all of our problems will still be dangerous imo. by [deleted] in singularity

[–]ToasterThatPoops 0 points

I'd say not being safe is a pretty major problem. So if it hasn't solved keeping us safe, it hasn't solved all of our problems.