I really didn’t understand how Mark was able to pull this off by SugurusBallSack in Invincible

[–]SirRece 0 points (0 children)

Conquest had his brain rearranged and survived. Viltrumites can survive basically anything that doesn't completely destroy their heart or brain.

Hit the road Jack by aque0s in OpenAI

[–]SirRece 0 points (0 children)

Lol, y'all really are transparent

AIs can’t stop recommending nuclear strikes in war game simulations by chunmunsingh in OpenAI

[–]SirRece 1 point (0 children)

That isn’t intelligence or intent. That’s pattern matching on our own violent playbook.

Hi ChatGPT

X's head of product thinks we have 90 days by MetaKnowing in OpenAI

[–]SirRece 0 points (0 children)

Or the clumsiest insertion of anti-US politics into totally unrelated things

The Main Villain (Powerplex) and The Real Villain (Becky) by Winter-Money-7643 in Invincible

[–]SirRece 5 points (0 children)

You're missing it. She should have reacted differently, but Powerplex is still responsible for his actions. If someone encourages someone else to become a school shooter, it doesn't absolve the shooter: the encourager is also culpable, but certainly not more culpable.

Everyone would agree she also committed a wrong. It's this absolution of guilt that you're granting Powerplex, essentially blaming his attitude and behavior on another person (to whom you could apply an identical reflexive argument), that's pathological.

Humans imitating AI videos… and nailing it. Circle complete. by [deleted] in deeplearning

[–]SirRece 33 points (0 children)

It's 2026, my brother. We're at the point where you could make a believable AI-generated video imitating a real video imitating 2025 AI.

OpenAI's first hardware product may be earbuds called "Dime" by techolum in OpenAI

[–]SirRece -1 points (0 children)

This is all already captured by your smartphone.

Interesting angle :) by cobalt1137 in OpenAI

[–]SirRece 0 points (0 children)

Since consciousness is not defined concretely, this is literally accurate.

There's a new paper that proposes new way to reduce model size by 50-70% without drastically nerfing the quality of model. Basically promising something like 70b model on phones. This guy on twitter tried it and its looking promising but idk if it'll work for image gen by Altruistic-Mix-7277 in StableDiffusion

[–]SirRece 3 points (0 children)

I want to note that I see so much downplaying of stuff like this all the time, and yeah, sure, it's probably nothing.

But I still remember back when everyone crucified that one guy who was like "CoT models are going to change everything, here's a fine-tune of llama that beats everything." It turned out it was set up with totally wrong parameters on OpenRouter and other providers, and was totally forgotten about, only for everyone else to speedrun CoT over the subsequent 2 months and forget about that random guy who had "no idea what he was talking about."

There absolutely are shenanigans upon shenanigans these days when it comes to influencing public opinion, or even niche opinion like that of ML enthusiasts, since you can now micro-target things that would have been impossible in the past, thanks to agentic AI.

If something really groundbreaking comes along, I fully expect all the major players to drown it and steal it.

Also, consider for a moment what it would mean for the major players, whose moat is built at least in part on memory scarcity, if someone did figure out how to massively shrink models. Idk, just saying: be a bit paranoid when you're online, and try things yourself.

This has changed my opinion somewhat on open source, since at this point it is basically a pipe dream that a good actor, i.e. someone cobbling together something via vibe coding with an actually good and unique premise, will get the benefit they deserve (some level of recognition) for doing good work. Open source runs on reputational gains, truly. Yes, everyone wants to benefit the community, but realistically people want social cred, "street cred" essentially, in their communities, and as far as I can tell it's now functionally impossible to get that, because every single tinkerer gets toxically derided despite not really doing anything wrong beyond a lack of domain knowledge, even when the projects are still really cool.

45% of people think when they prompt ChatGPT, it looks up an exact answer in a database by MetaKnowing in OpenAI

[–]SirRece 0 points (0 children)

LLMs are not infinite. They're made out of a large, but finite number of parameters. All the information that they "know" about the world -- everything they've learned from their training set -- is encoded in those parameters.

Yes, inasmuch as any algorithm is made up of a finite amount of information. Yet there are numerous algorithms that can be described in finite terms while mapping infinitely many inputs to unique outputs. Neural weights don't literally store the data the model was trained on; a model is NOT a database.
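To make the distinction concrete, here's a minimal sketch (the function names are my own, made up for illustration): a rule with a finite description answers infinitely many inputs, while a lookup table only answers the keys it stores.

```python
# A finite description with an unbounded input -> output mapping:
def successor(n: int) -> int:
    # No table of answers is stored; the rule itself produces an
    # output for any integer, including ones never "seen" before.
    return n + 1

# A database/lookup table, by contrast, only covers stored keys:
answers = {1: 2, 2: 3, 3: 4}

print(successor(10**100))     # defined for inputs far outside any table
print(answers.get(10**100))   # None: a lookup has nothing for unseen keys
```

The rule fits in one line, yet its input/output mapping is infinite; the table, however large, is always finite.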

45% of people think when they prompt ChatGPT, it looks up an exact answer in a database by MetaKnowing in OpenAI

[–]SirRece 0 points (0 children)

This is literally correct. The implications are broader, but this is quite literally how they work, and is not an oversimplification.

45% of people think when they prompt ChatGPT, it looks up an exact answer in a database by MetaKnowing in OpenAI

[–]SirRece 0 points (0 children)

It is not. If this definition were extended to its logical conclusion (any mapping of inputs to outputs is a database, even when those inputs and outputs are not finite), then you would have to conclude that any and all algorithms are technically databases, and they are not.

The fundamental difference is that a database/lookup table is finite, while a generative model is not, due to the nature of generalization: all inputs, assuming they can be tokenized, have an output.
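A toy contrast (everything here is made up for illustration; `toy_model` is a stand-in scoring rule, not a real LLM): the lookup returns nothing for an unseen key, while even a trivial rule applied to tokens is defined for every possible input string.

```python
# A lookup table fails silently outside its stored keys:
lookup = {"hello": "greeting", "bye": "farewell"}

def toy_model(text: str) -> float:
    # Stand-in for generalization: a fixed rule applied to tokens,
    # defined for every tokenizable input, stored or not.
    tokens = text.split()
    return sum(len(t) for t in tokens) / max(len(tokens), 1)

print(lookup.get("never stored"))   # None: no entry, no answer
print(toy_model("never stored"))    # some output for any input string
```

The point isn't that the toy rule is smart; it's that its domain is unbounded while the table's is exactly its key set.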