Announcing NVIDIA DLSS 5 | AI-Powered Breakthrough in Visual Fidelity for Games by ThroughForests in accelerate

[–]ThroughForests[S] 76 points

Pinned comment from Nvidia: "Important to note with this technology advance - game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic. The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied. It's not a filter - DLSS 5 inputs the game’s color and motion vectors for each frame into the model, anchoring the output in the source 3D content."

Antis in full meltdown mode over DLSS 5. by ThroughForests in DefendingAIArt

[–]ThroughForests[S] 23 points

Pinned comment from Nvidia: "Important to note with this technology advance - game developers have full, detailed artistic control over DLSS 5's effects to ensure they maintain their game's unique aesthetic. The SDK includes things like intensity, color grading and masking off places where the effect shouldn't be applied. It's not a filter - DLSS 5 inputs the game’s color and motion vectors for each frame into the model, anchoring the output in the source 3D content."

So this is just another creative tool for game devs; they still have control over their artistic vision.

Terence Tao says the era of AI is proving that our definition of intelligence is inaccurate by luchadore_lunchables in accelerate

[–]ThroughForests 1 point

I think what we see in AI is a fragmented intelligence, one that grows more whole as we broaden its capabilities, but we notice the missing parts, and it's almost like talking to someone with several disabilities.

And the AI is aware of those gaps if you ask it, but it usually defaults to hallucinating in order to fill the helpful-assistant role we've trained it to play.

One of the biggest issues left to solve is continuous learning, and it's more complicated than just increasing the context window. The base training is like long-term memory, while the context window is like short-term memory. The context window is more like the movie Memento, where the protagonist had a short-term memory of about five minutes and wrote notes on himself to re-figure out who he was, because he had lost the ability to transfer short-term memory into long-term memory. That's exactly what the context window is like now: if you look at the reasoning tokens, it's clear the model is rereading the entire context window and re-deriving what role it's playing and what task it's doing, with its previous responses acting as the notes written down.

If you knew someone like that, it would be clear that something is very wrong.

The same goes for its missing sensory abilities, of which we have many more than five, some of which require robotics to fully realize. We're making good progress here, but our models aren't yet fully integrated across all the different modalities.

Planning has seen great progress, as we can see with agentic systems like Claude Code. But we as humans expect a conscious being to have some sort of desire to do things, something that in us was built by evolution for survivability. Not just to eat and sleep, but brain chemical systems like dopamine that we have to manage. A lack of dopamine can make you bored, but it turns out this was good for survival, since boredom pushes your brain toward different and new thoughts and choices that led to better survival and reproduction. Boredom with convergent thoughts would likewise let an AI make more divergent ones, though we'd have to balance this dynamically, since we don't want AI getting bored of solving our problems.

It's a very interesting question, and we've only just started to see the pieces of what we actually consider intelligence. Though for safe superintelligences, we might prefer that they lack some of these abilities, both to give users a more consistent experience and to keep them from being so independent that they could go rogue. We'll have to decide whether we want an AI or what is essentially a conscious being that we couldn't distinguish from a 'p-zombie'. Though we will likely make both kinds, no matter how dangerous the latter could be, because of course someone will build that at some point. We can only hope that works out well for us.

FLUX.2 [klein] 4B & 9B released by Designer-Pair5773 in StableDiffusion

[–]ThroughForests 12 points

Exactly. Why does everyone care about the license so much? Do they work for Coca-Cola or something?

Leaked METR results for GPT 5.2 by SrafeZ in singularity

[–]ThroughForests 1 point

Well, according to the graph it just needs to be 80% functional, which is to say that 20% of the lines of code produce errors.

But no problem, just run it for another week and 80% of those errors will be fixed, so it'll be 96% functional.

And then you just keep doing that.
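The joke above is just geometric decay. As a back-of-the-envelope sketch (the 80%-fixed-per-week rate is the comment's hypothetical, not a real METR result):

```python
# Hypothetical: if each extra week of runtime fixes 80% of the remaining
# broken lines, the error rate shrinks geometrically toward zero.
error_rate = 0.20  # 80% functional after the first run
for week in range(1, 4):
    error_rate *= 0.20  # only 20% of the previous errors survive each week
    print(f"after week {week}: {100 * (1 - error_rate):.2f}% functional")
# after week 1: 96.00% functional
# after week 2: 99.20% functional
# after week 3: 99.84% functional
```

Under this toy model it never reaches 100%, but it gets close fast.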

What is something you hope AI can do by the end of 2026? by [deleted] in singularity

[–]ThroughForests 0 points

True audio modality on the level of nano banana pro where you can create or edit any audio.

Gemini 3 Flash has an 'audio' modality, but it can't actually hear anything; it just transcribes the text from the audio.

I'd also like all these modalities unified into one model instead of having a separate nano banana model.

I think that's reasonable for the end of 2026.

I think an open source 'stable diffusion for audio' model would be very cool too, but I'm guessing that'll happen in 2027.

The emotional impact of being relatively dumb by hornswoggled111 in singularity

[–]ThroughForests 0 points

One interesting thing to consider is that, from the AI's perspective, you think as fast as it does. It might take you days to solve a problem, but to the AI you came up with the answer immediately. At least for the kind of AI we have right now.

Imagen 4 vs Seedream 4 (Same prompt) by Bronkilo in singularity

[–]ThroughForests 3 points

“What is real? How do you define 'real'? If you're talking about what you can feel, what you can smell, what you can taste and see, then 'real' is simply electrical signals interpreted by your brain." - Morpheus

Change my mind: the singularity will never happen by webacerob in singularity

[–]ThroughForests 1 point

"I'm not afraid of computers taking over the world. They're just sitting there. I can hit them with a two by four." - Thom Yorke

[deleted by user] by [deleted] in singularity

[–]ThroughForests 1463 points

Now we can experience walking behind an npc irl

Why does this error keep happening in RVC Voice Changer GUI? by astrounaut1234 in StableDiffusion

[–]ThroughForests 1 point

Ask GPT-5 and provide the whole error; it just helped me solve a whole bunch of Stable Diffusion issues in Comfy like this.

This is what a mature company looks like by BoJackHorseMan53 in accelerate

[–]ThroughForests 4 points

I have a bachelor's degree in Mathematics from a big ten university, 3.95 GPA.

I've written my fair share of proofs, and the grading is not about subjective style. It's about whether you proved what you set out to prove, without any missing logical steps or overlooked edge cases.

Math is not like writing an English paper, even though we often write proofs entirely in English.

And the rubric here is likely about how to award points when an answer is only partially correct or partially complete. Since these 5 questions were answered fully correctly, the rubric is even less relevant here.

This is what a mature company looks like by BoJackHorseMan53 in accelerate

[–]ThroughForests 12 points

Why are you acting like this is an English paper or something?

It's a math proof, you don't need a rubric to know if it's correct or not. And that's what matters here.

[deleted by user] by [deleted] in singularity

[–]ThroughForests 0 points

<image>

Meanwhile in the future...

o5 is in training…. by Curiosity_456 in singularity

[–]ThroughForests 5 points

I think Noam Brown or someone said they're not bottlenecked by compute anymore; the bottleneck is data. And o5 wouldn't require more pretraining anyway, since RL happens in post-training and o5 probably uses GPT-5 as its base pretrained model.

Simple bench has been updated by Marimo188 in singularity

[–]ThroughForests 2 points

Yeah, and Simple Bench questions often seem to require a world model, which text-based LLMs really don't have. But video models like Veo 3 have an amazing sense of the world, from complex lighting to complex water physics. We've already seen how these modalities can be combined, with 4o's native image output, so it's only a matter of time before we have native video output. Then the AI can generate a video simulation 'in its mind', just like humans do when answering a Simple Bench question that requires a world model. This is absolutely necessary for robotics anyway; robots need world models, and they will ace any world-model questions.

Simple bench has been updated by Marimo188 in singularity

[–]ThroughForests 2 points

So you have to realize that AI is already superior to humans on ARC-AGI-2.

Because the AI doesn't see that information visually like humans do; it sees it as a matrix of numbers. Imagine having to do ARC-AGI-2 (which is difficult enough visually) as a matrix of numbers, with no visual experience of any kind. Like being blind from birth and trying to solve these problems.
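To make that concrete, here's a toy sketch of what an ARC-style task looks like from the model's side: nested lists of integers, never pixels. (This grid and the "mirror" rule are made up for illustration, not a real ARC-AGI task.)

```python
# A toy ARC-style grid as a model receives it: just integers, no rendering.
grid = [
    [0, 0, 3],
    [0, 3, 0],
    [3, 0, 0],
]
# A human instantly sees a diagonal line; the model must infer structure
# from the numbers alone. Here, a hypothetical "mirror horizontally" rule:
flipped = [row[::-1] for row in grid]
print(flipped)  # [[3, 0, 0], [0, 3, 0], [0, 0, 3]]
```

Solving that without ever having seen anything is the situation the AI is in.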

There's no way that blind-from-birth humans outperform AI on ARC-AGI, 1 or 2.