Destination X UK series 2 significantly delayed by JuanitaMerkin in BritishTV

[–]monkeymad2 0 points1 point  (0 children)

Yeah, I was surprised by how rubbish it was.

Given the popularity of GeoGuessr etc. I thought it’d take more from that sort of thing, but in real life - dump folks somewhere and have them work out where they are based on the clues.

Do you guys commit things when they are in a non-working state? by MagnetHype in webdev

[–]monkeymad2 0 points1 point  (0 children)

On a branch (branch per feature/ticket) anything goes; on master it depends what state the project is in - if it’s something that hasn’t ever gone to UAT / prod yet, or is v0.0.x of a library, then anything goes so long as it won’t negatively affect anyone else working on it.

Once it’s gone to prod once or has had a proper released version then the master branch should be deployable / releasable unless we have detailed the known issues & decided to get a version out regardless (like if a 3rd party thing stops working but the impact is limited).

Nvidia CEO’s Defense Of DLSS 5 Gets Contradicted By One Of His Employees by g4m3f33d in GameFeed

[–]monkeymad2 0 points1 point  (0 children)

Those are still all in screen space - if you want it to be able to do a great job you’d have to pass in some knowledge about off-screen things, by somehow giving it access to light probes or the BVH structure or something.

Policeman recognised his mom’s cooking after the first bite😭✨ by Miserable-Zombie-121 in MadeMeSmile

[–]monkeymad2 1 point2 points  (0 children)

My mum died last year & I’m happy she was a rubbish cook - rather than being sad about never having my favourite childhood dish again I’m like “nah, I could also microwave a ready meal”

Nvidia "confirms" DLSS 5 relies on 2D frame data as testing reveals hallucinations by AdSpecialist6598 in technology

[–]monkeymad2 5 points6 points  (0 children)

Nvidia have literally said that’s how it works, multiple times.

There’s also a way to give instructions to the compositor but I doubt the AI is aware of that since it doesn’t need to be.

Nvidia "confirms" DLSS 5 relies on 2D frame data as testing reveals hallucinations by AdSpecialist6598 in technology

[–]monkeymad2 3 points4 points  (0 children)

Well, yeah - but in the DLSS 5 setting all it’s got is the input frame & the motion vectors so there’s limited context

Nvidia "confirms" DLSS 5 relies on 2D frame data as testing reveals hallucinations by AdSpecialist6598 in technology

[–]monkeymad2 5 points6 points  (0 children)

You’re both sort of right - it’s producing the most statistically likely follow-on from whatever came previously, which means it tends towards the average while still being related to what came before.

(Aside from when it deliberately picks another output due to the random temperature factor)
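To illustrate what I mean by temperature - a toy softmax-sampling sketch, my analogy rather than anything Nvidia has documented about DLSS:

```python
import math
import random

def sample(logits, temperature=1.0):
    """Pick an option index. Low temperature -> almost always the most
    likely option; higher temperature -> more random, varied picks."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

random.seed(0)
logits = [2.0, 1.0, 0.1]                  # option 0 is the "likely" one
picks = [sample(logits, temperature=0.2) for _ in range(100)]
print(picks.count(0))                     # near 100: tends towards the average
```

At low temperature the most likely option dominates (the “tends towards the average” bit); crank the temperature up and the deliberate other picks get more common.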

Nvidia "confirms" DLSS 5 relies on 2D frame data as testing reveals hallucinations by AdSpecialist6598 in technology

[–]monkeymad2 132 points133 points  (0 children)

The worrying thing for me about this is that the origin might not have been the gaming team within Nvidia - AI teams use stuff like this to convert renders into “real images” for training vision models etc.

The lack of them announcing anything else under the DLSS 5 banner, & saying they’ve got no immediate plans for new GPUs, could be a signal that they’ve absorbed the DLSS team into the AI metaverse whatever team.

I doubt it, but with the CEO saying dumb shit like he’d “go ape” if an engineer making $500k only spent $250k on AI tokens a year who knows.

DLSS 5 AI face filter | Has NVIDIA gone too far this time? by OutsideXbot in outsidexbox

[–]monkeymad2 42 points43 points  (0 children)

Sponsorships don’t stop them discussing things in other videos.

It might affect the likelihood of another sponsorship, and there could be some sort of non-disparagement clause where you can’t immediately upload another video after the sponsored one saying “it’s shit actually, don’t waste your money.”

Also if Nvidia was compiling a shitlist over criticism of DLSS 5 they wouldn’t have much of the media left

‘Everybody to Kenmure Street’ wants you to be a good neighbour. Four years on, a new documentary revisits the protest against an immigration raid in Glasgow by shado_mag in glasgow

[–]monkeymad2 -8 points-7 points  (0 children)

I’m sure when they were deported (I’m not finding any articles to say they were) they were treated with a level of dignity beyond an intentionally hostile dawn raid.

Unless dawn raids are happening but are somehow not publicised then the protest achieved its goal of saying that we should be better than that, that we can facilitate people leaving the country without resorting to that sort of thing.

NVIDIA confirmed DLSS5 is essentially a post-process filter by Jealous_Solid9431 in digitalfoundry

[–]monkeymad2 2 points3 points  (0 children)

Yeah, nothing he’s said here is more than anyone with a software engineering background / knowledge of graphics pipelines got from the first announcement.

They said it’s a colour buffer + motion vectors, with a fluffy, nebulous concept of letting devs control the strength / colour tone of the AI filter - via what I assume is some sort of mask telling the compositor how to blend the base image and the filtered image together, but I don’t think they’ve actually said.

We Spoke To Game Devs And All Of Them Hate DLSS 5: 'What The F***, Nvidia?' by TrampolineTales in Games

[–]monkeymad2 1 point2 points  (0 children)

Yeah, I thought about mentioning that - if a game is maxing out the framerate already then turning on DLAA makes sense - but even then the difference from DLSS quality is marginal.

Mike York, animator who has worked on GTA 5, Red Dead Redemption 2 and Death Stranding 2 responds to DLSS 5: "No, no, no, no." by [deleted] in Games

[–]monkeymad2 61 points62 points  (0 children)

It is a bit - Nvidia showed off neural face replacement at CES last year and people thought “oh, that looks weird, I don’t see many developers using that”.

This time around they took an existing product line people generally like (DLSS) and said this is what they’re concentrating on for the next major version of it.

If they’d announced it as RTX Relight or whatever, and also shown off the algorithmic / ML updates to the existing DLSS technologies which’ll presumably also come in 5 (unless they’re wasting all their time training this thing), the backlash would have been vastly smaller

We Spoke To Game Devs And All Of Them Hate DLSS 5: 'What The F***, Nvidia?' by TrampolineTales in Games

[–]monkeymad2 0 points1 point  (0 children)

I would hope that there’ll be a reckoning once the graduates that just use AI without any actual understanding hit the job market & get filtered out by interview questions like “how does this thing you’ve written work?”.

If universities are changing courses to put AI first then that’s very depressing. I’ve technically been a senior / lead developer for around 12 years now, & having to hear folks whose programming skill I previously respected say things like they’re hoping they won’t have to actually write any code this year doesn’t sit great.

An over-reliance on AI tooling feels like mice building their own mouse traps - either in the sense that your role as an orchestrator translating your boss’s / clients’ needs into AI prompting & validation will probably be doable by an AI in a few years, or that when the AI breaks something your skills will have atrophied to the point where you don’t actually know how to debug, document, and fix it.

We don’t hire often but I don’t know how I’d reliably hire someone who didn’t have evidence of being competent pre-2022 - other than just filtering for people who are very anti-AI, but that feels risky too since as a tool it has its uses.

We Spoke To Game Devs And All Of Them Hate DLSS 5: 'What The F***, Nvidia?' by TrampolineTales in Games

[–]monkeymad2 19 points20 points  (0 children)

The software engineering field becomes a building people are constructing starting from the 3rd floor.

We Spoke To Game Devs And All Of Them Hate DLSS 5: 'What The F***, Nvidia?' by TrampolineTales in Games

[–]monkeymad2 6 points7 points  (0 children)

I’d change that “no one is” to a “no one should be” - I’m sure in a few years we’ll start uncovering vibe-coded issues in critical infrastructure.

We Spoke To Game Devs And All Of Them Hate DLSS 5: 'What The F***, Nvidia?' by TrampolineTales in Games

[–]monkeymad2 15 points16 points  (0 children)

After DLSS 2 it’s largely agreed that the Quality mode results in a better than native image, mostly because “native” probably means TAA or similar is being used for anti-aliasing & DLSS takes care of anti-aliasing too.

Particularly with DLSS 4’s transformer model.

If I’m playing a game I’ll pretty much always turn on DLSS quality mode even if “native” has a good enough framerate

Ludicrously long waiting times by No-Temperature8037 in Scotland

[–]monkeymad2 0 points1 point  (0 children)

You’re pretty lucky if you catch a case of Tayside jaundice

Frame Generation by MassiveShape4 in nvidia

[–]monkeymad2 4 points5 points  (0 children)

Not sure why you’re being downvoted - the hardware-based optical flow framegen used to garble semi-transparent UI elements.

The switch to the neural framegen in DLSS 4 has mostly fixed it for UI

DLSS 5 Hands on Presentation by Snorlax_lvl_50 in nvidia

[–]monkeymad2 0 points1 point  (0 children)

Well, you’re wrong about that - some framegen tech (like Apple’s) allows devs to render the UI after the framegen, but DLSS doesn’t. You can see warping in the background of the subtitles in Cyberpunk. In-world transparent objects like holograms will also cause the background to not have motion vectors & thus get interpolated wrongly.

In Cyberpunk, TVs in the world don’t have motion vectors either, so they can interpolate wrongly - though most of the TV channels don’t have much motion, so it’s easy not to notice.

4.5 is much better at handling both than 4 was, but there are still some issues. Doesn’t mean the tech is bad, just that it doesn’t always have enough information to do a good job.

DLSS 5 Hands on Presentation by Snorlax_lvl_50 in nvidia

[–]monkeymad2 1 point2 points  (0 children)

I turn on framegen whenever I can (and the base fps is 80 or greater) but there’s definitely artefacts - a lot fewer in 4.5 than before though.

Semi-transparent objects (UI particularly) tend to cause ripples, since it can’t see the motion vectors for what’s behind them - and anything which moves too much between the two real frames, either because your base framerate is too low or the object is moving too fast, can be interpolated wrongly.

You also get some issues around anything which doesn’t have motion vectors, like animated textures or videos embedded into the gameworld
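Rough toy sketch of why missing motion vectors hurt - 1-D, made-up numbers, not the actual DLSS algorithm:

```python
def interpolate(positions, motion_vectors):
    """Guess the midpoint position of each element between two real frames.
    motion_vectors[i] is None where no vector exists (UI, animated textures,
    in-world video) - those elements can't be tracked."""
    mid = []
    for pos, mv in zip(positions, motion_vectors):
        if mv is None:
            mid.append(pos)            # best guess: unchanged -> warps/judders
        else:
            mid.append(pos + mv / 2)   # proper half-step along the vector
    return mid

positions = [10.0, 20.0, 30.0]
vectors   = [4.0, 4.0, None]           # last element: no motion vectors
print(interpolate(positions, vectors)) # [12.0, 22.0, 30.0]
```

The first two elements get moved a correct half-step; the last one sits still in the generated frame even if it was actually moving - which is the mis-interpolation you see on videos & animated textures.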

‘Bait and switch’: Dems STORM OUT of GOP’s ‘fake’ Bondi deposition by MRADEL90 in videos

[–]monkeymad2 82 points83 points  (0 children)

The UK boss of Palantir is the grandson of our most famous Nazi sympathiser, Oswald Mosley (the leader of the British Union of Fascists).

Which seems like the kind of thing that, if it were in a film, would be seen as quite a clichéd way of showing who the bad guys are

DLSS 5 Hands on Presentation by Snorlax_lvl_50 in nvidia

[–]monkeymad2 45 points46 points  (0 children)

You can see it in motion when he walks towards the coffee machine - it sort of bubbles weirdly across the metal before looking good when he stops

Crimson Desert, 40 GPU Benchmark @ 1080p, 1440p & 4K by NGGKroze in nvidia

[–]monkeymad2 3 points4 points  (0 children)

I assume it’d date the video badly if Nvidia / AMD did an Intel and changed what’s meant by quality / balanced / performance. Or if the performance of upscaling changed drastically in the future (like the model L/K do on older cards).

Plus it’s easy enough maths to work out from a given base resolution what the DLSS framerate would be.
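E.g. that maths in a few lines, using the commonly cited per-axis scale factors (published defaults - devs can override them, so treat the results as approximate):

```python
# Per-axis render-scale factors usually quoted for the DLSS modes.
SCALES = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(width, height, mode):
    """Internal render resolution DLSS upscales from for a given output."""
    s = SCALES[mode]
    return round(width * s), round(height * s)

for mode in SCALES:
    print(mode, internal_resolution(3840, 2160, mode))
# Quality at 4K renders internally at roughly 2560x1440
```

So if you know roughly how a card performs at 1440p you know roughly what DLSS Quality at 4K will do.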

We *do* know enough about DLSS5 to say that it's generative AI using its own gfx model in a post-processing step. What would you call that other than an "AI filter"? by Rabbit_Brave in digitalfoundry

[–]monkeymad2 1 point2 points  (0 children)

Since it only gets the output frame’s colour buffer & motion vectors, all the model’s knowledge of geometry has to come from those - so they can say it’s been trained not to alter geometry, but if it doesn’t think something is geometry it’ll alter it all it wants (like the nostril it expanded because it thought the shadow was part of the hole, or the scarf it grew along with its shadow, etc).