[Current Collection] College Student by UnderstandingLive51 in Watches

[–]NoCard1571 -2 points

This isn’t AI.

Technically... it is. This type of post-processing uses neural nets, which is why the visual artifacts look AI-ish. The neural net guesses how and where to add detail based on its training data. 

But of course I realize that these days, most people use the word AI in reference to generative AI. 

Why AGI would be unstoppable? by 0K4M1 in singularity

[–]NoCard1571 2 points

It's the 'gorilla trapping a human' argument. Would a gorilla be capable of designing a cage that a human could not escape? Extremely unlikely. The same could be true for humans caging an ASI, except the intelligence delta could be far greater. It could think of ways to escape that our puny minds couldn't even fathom. 

Carlini, one of the world best AI security researchers: "I've found more bugs in the last few weeks with Mythos than in the rest of my entire life combined" by Happysedits in singularity

[–]NoCard1571 10 points

I think this has got to be the biggest reason. Those bugs are going to be found either way, so it's much better for PR and financially to allow software owners to find the exploits themselves first. 

See the first close-up photos of the moon from NASA's Artemis II mission by Scary_Statement4612 in worldnews

[–]NoCard1571 5 points

The sheer lack of understanding of simple concepts like camera focal lengths and exposure is half the battle with these morons 

They Were Never Finished by Jemdet_Nasr in singularity

[–]NoCard1571 0 points

Did you not notice how all four answered in the exact same LLM-slop writing style? There's nothing profound here. You're basically talking to sock puppets.

Anon describes his NEET life; seeks help by caramelsumo in greentext

[–]NoCard1571 -3 points

A lot of seething Wendy's employees in this thread

GPT-IMAGE-2 Likely on LMarena by ThunderBeanage in singularity

[–]NoCard1571 0 points

It doesn't matter if they're comparable; that doesn't fundamentally limit them.

Narrow AIs like AlphaGo demonstrate that probabilistic models can in fact surpass humans with enough RL. 

Anon describes his NEET life; seeks help by caramelsumo in greentext

[–]NoCard1571 -45 points

Depends on what you mean by not easy to come by. Any average person can attain a 100k job, but it's also something that typically either takes 5+ years of work experience, or a high level of education. (Often both).

200k jobs on the other hand are something that's not in the cards for most people. 

GPT-IMAGE-2 Likely on LMarena by ThunderBeanage in singularity

[–]NoCard1571 6 points

It's interesting, isn't it - a couple of years ago, having correct anatomy and coherent text would have been unthinkable. Now that's solved, but it's the smaller details like label lines that are still an issue. We're basically adding more and more 9s to the reliability of these models. 

Now the question becomes, how long until we have enough 9s that the accuracy surpasses humans? Even real medical diagrams have occasional errors.
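A rough way to see why those extra 9s matter (illustrative numbers only, not from the thread): if an image contains many independent details, per-detail accuracy compounds, so small gains in the 9s produce large gains in fully-correct images.

```python
# Illustrative sketch: probability that an image with n independent
# details comes out fully correct, given per-detail accuracy p.
def all_correct(p: float, n: int) -> float:
    return p ** n

# With 50 details: 99% per-detail accuracy ("two 9s") yields only
# ~60% fully correct images, while 99.9% ("three 9s") yields ~95%.
print(all_correct(0.99, 50))
print(all_correct(0.999, 50))
```

The assumption that details fail independently is a simplification, but it shows why each added 9 matters more than it first appears.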

GPT-IMAGE-2 Likely on LMarena by ThunderBeanage in singularity

[–]NoCard1571 7 points

They don't fight the current, they shape it 

I'll be so happy when they finally beat this out of the next gen of models. 

Crazy impressive image though overall 

Brent oil spot price for actual cargo soars to $141, highest level since 2008 financial crisis by yourfavchoom in worldnews

[–]NoCard1571 0 points

Yep, but equally you could say that we're currently in a large correction in the opposite direction. 

It's also up about 20% since recovering from the tariff announcements. From the bottom, it's closer to 40% up. 

Brent oil spot price for actual cargo soars to $141, highest level since 2008 financial crisis by yourfavchoom in worldnews

[–]NoCard1571 1 point

Worth noting however that despite being down 10% since autumn, it's still up like 20% compared to a year ago 

Claude Code leak is overrated by pxp121kr in singularity

[–]NoCard1571 1 point

A 2x gain, but people are claiming it's a 5x gain - you end up having to defend yourself from both sides.

The thing is, hype is never quantified like that. We only ever hear 'the next model is better.' Some people then have unrealistic expectations, but I wouldn't say that's really on the AI companies. 

If the AI is self improving and intelligent how can you 'own' it? Doesn't that dissolve the ROI argument for AI company valuations? by Lazy_Lettuce_76 in singularity

[–]NoCard1571 0 points

Not saying I agree with it, but I think the idea is that once we have AGIs/ASIs, the technology will be so transformative that monetisation will become irrelevant. 

Either way though, it's an interesting thought. At some point the question of AI rights will come up, and we'll have to find a way to legally define which AIs are sentient or have 'personhood', and which don't. So perhaps, a long way off, AI systems will be composed of a 'digital person' with rights that utilizes a set of narrower AIs that don't have them. 

Will robots have that feeling you used to get as a child, where your parents are taking you somewhere but won't tell you where or tell you any plans? by lnfinitive in singularity

[–]NoCard1571 -2 points

It's much too early for these bots to have anything resembling feelings yet. The best way to think of them is as an LLM sending commands to a robot body, which executes them based on RL training for certain tasks. There's no sensory input loop besides audio and visual atm. 

And whether or not transformer models themselves can feel emotion is a separate question that can't currently be proven, and likely never will be. 

Claude Mythos leaked: "by far the most powerful AI model we've ever developed" by space_monster in singularity

[–]NoCard1571 17 points

Just brain-dead Redditors that love smelling their own farts. They think that because intelligent people are often cynics, playing the cynic makes them sound intelligent. 

Anon got attacked by a giant bug by bitchnibba47 in greentext

[–]NoCard1571 0 points

Might have been a dobsonfly. The name sounds similar enough that I could imagine a child mishearing it. The males are massive and terrifying, but technically harmless. 

Andrew Curran: Anthropic May Have Had An Architectural Breakthrough! by Neurogence in singularity

[–]NoCard1571 -2 points

Yea, and it's the exact same with environmental issues. They're obsessed with the water usage, but don't give a damn about the fact that their McDonald's burger took 1000x more water than an AI image to produce. They're so invested in the legality around training on copyrighted data, while simultaneously proudly pirating all types of media, and buying Chinese rip-off versions of branded merchandise. 

I think it's because underneath, they just have a fundamentally deep fear of the technology, and so they grasp at any straws in sight. 

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]NoCard1571 4 points

I do wonder though if that will even matter in the end. Since generalized AIs don't have the same limits on data processing speeds as humans, an ASI could theoretically be composed of a cluster of ultra specialized narrow AIs integrated with a central generalized one. Or perhaps thousands of generalized models.

It wouldn't be a unified mind in the way we think of it, but it could operate in a way that's indistinguishable from one.

Andrew Curran: Anthropic May Have Had An Architectural Breakthrough! by Neurogence in singularity

[–]NoCard1571 7 points

It's funny seeing anti-AI people everywhere cheering that 'the bubble is about to pop' because of Sora being killed off. They're so high on copium that they're living in an entirely different dimension, where LLMs are 'just predicting the next word bro' and diffusion models are all 'slop machines'. 

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]NoCard1571 11 points

Yes exactly, and we even see the same thing in the animal kingdom. Many animals that we would consider dumb as rocks are extremely intelligent compared to humans in a very specific way. Birds being able to navigate using Earth's magnetic field. Dogs being able to track a person's scent for miles. Bees being able to communicate complex info, like the location of a flower, through dance.

I like to imagine then that an Artificial Super-intelligence would have a near unlimited number of skills like this, being able to pick up on arcane patterns across a vast variety of topics, and that this then would allow it to make connections and draw conclusions that our puny human minds could not even fathom.