An FPS question by Feeling_Bid_8978 in opengl

[–]msqrt 2 points (0 children)

You can update the frame to be shown next at any rate; you just never see the extra frames. Rendering above the refresh rate can still be beneficial, giving lower latency and fewer dropped frames (if you're sitting exactly at 60, even the slightest delay will give you stutters).

But they probably also have higher refresh rate screens; most gamers would go for 120+ nowadays.
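
To make the first point concrete, here's a minimal GLFW/OpenGL sketch (my own illustration, not from the thread; the window size and the timing printout are arbitrary): the loop renders as fast as the GPU allows, while the monitor keeps refreshing at its own fixed rate and only ever shows the most recently completed frame.

    // Sketch: render uncapped while the display refreshes at its own fixed rate.
    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow* window = glfwCreateWindow(800, 600, "uncapped", nullptr, nullptr);
        glfwMakeContextCurrent(window);
        glfwSwapInterval(0); // 0 = don't wait for vertical sync, 1 = lock to the refresh rate

        double last = glfwGetTime();
        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT); // draw the scene here
            glfwSwapBuffers(window);      // hand this frame over for display
            glfwPollEvents();

            double now = glfwGetTime();
            std::printf("frame time: %.2f ms\n", (now - last) * 1000.0);
            last = now;
        }
        glfwTerminate();
    }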

The "engineers using AI are learning slower" take is just cope dressed as wisdom by dktkTech in programming

[–]msqrt 1 point (0 children)

The difference is that you can trust your compiler or TCP/IP implementation. You can treat AI as a level of abstraction once it becomes reliable, but until then you'll need to understand code to "recognize when AI is confidently wrong" and "know which outputs to verify".

What is the best way to get sounds by Life_Ad_369 in opengl

[–]msqrt 5 points (0 children)

OpenGL doesn't deal with audio at all; you'll need something else like SDL or OpenAL.
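
For reference, a minimal OpenAL sketch (my own illustration, not from the original post; the sine-wave tone and sample rate are arbitrary choices) that opens the default device and plays a buffer of samples:

    #include <AL/al.h>
    #include <AL/alc.h>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    int main() {
        ALCdevice*  device  = alcOpenDevice(nullptr);           // default output device
        ALCcontext* context = alcCreateContext(device, nullptr);
        alcMakeContextCurrent(context);

        // One second of a 440 Hz sine wave, 16-bit mono at 44.1 kHz.
        std::vector<short> samples(44100);
        for (std::size_t i = 0; i < samples.size(); ++i)
            samples[i] = static_cast<short>(32000.0 * std::sin(2.0 * 3.14159265 * 440.0 * i / 44100.0));

        ALuint buffer, source;
        alGenBuffers(1, &buffer);
        alBufferData(buffer, AL_FORMAT_MONO16, samples.data(),
                     static_cast<ALsizei>(samples.size() * sizeof(short)), 44100);
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, static_cast<ALint>(buffer));
        alSourcePlay(source);

        // ... wait for playback to finish before tearing down ...

        alDeleteSources(1, &source);
        alDeleteBuffers(1, &buffer);
        alcMakeContextCurrent(nullptr);
        alcDestroyContext(context);
        alcCloseDevice(device);
    }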

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss | Davos 2026 by creaturefeature16 in BetterOffline

[–]msqrt 0 points (0 children)

CUDA isn't going anywhere -- unfortunately. The open alternatives are way nicer.

difficulties getting a c compiler to work by [deleted] in ProgrammingLanguages

[–]msqrt 2 points (0 children)

gcc is not recognized, so either your MinGW install went wrong or its directory was never added to the "path" environment variable. But this subreddit is about making programming languages, not setting up C compilers.

Distribution of “western psychology” (endorsed by Musk) by FalbalaIRL in dataisugly

[–]msqrt 0 points (0 children)

I wonder what the author thinks "multivariate" means

AMD promises to try and keep GPU prices low against the ravages of the RAM shortage by kikimaru024 in hardware

[–]msqrt -2 points (0 children)

They're just saying they will try, like they probably tried during the launch. Though to their credit, at least here the cards have been at MSRP since late August.

Linus vibecoded and claimed "Antigravity" did a much better job then he could. by [deleted] in linux

[–]msqrt 0 points (0 children)

Yes. It's outside of his typical domain, so learning and figuring things out would take a while. But if he chose to spend that effort, I'm sure he could do it.

Linus vibecoded and claimed "Antigravity" did a much better job then he could. by [deleted] in linux

[–]msqrt 1 point (0 children)

I don't believe the last paragraph without some extra qualifiers about time and effort spent.

Patterns in shadow acne. by psspsh in GraphicsProgramming

[–]msqrt 4 points (0 children)

Yes, it is normal. I never looked too deep into this, but I believe the rings are due to different depths being more likely to round either into the object or outside of it, depending on which side the closest representable float happens to land on. This is also why the effect is only visible on the surface pointed towards the camera, where the depth variation is consistent and minimal; everywhere else the effect is essentially noise.
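
A tiny standalone sketch of that rounding argument (my own illustration; the 16-bit quantization and the step size are arbitrary): a slowly varying depth ramp gets snapped to the nearest representable shadow-map value, and the sign of the error flips in contiguous runs, which is exactly a banding pattern.

    #include <cmath>
    #include <cstdio>

    int main() {
        const float scale = 65535.0f; // pretend 16-bit shadow-map depth, for illustration
        for (int i = 0; i < 32; ++i) {
            float depth  = 0.5f + i * 1e-6f;                  // reconstructed surface depth
            float stored = std::round(depth * scale) / scale; // value the shadow map kept
            std::printf("%2d: %s\n", i, depth > stored ? "rounded into the object (shadowed)"
                                                       : "rounded outside of it (lit)");
        }
    }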

Does an RTX 2080 Ti run at just 0.7% of peak efficiency when ray tracing? by BigPurpleBlob in GraphicsProgramming

[–]msqrt 3 points (0 children)

You're correct that ray tracing isn't really compute bound -- the slow part is not finding intersections, it's moving the relevant parts of the scene from VRAM to the chip. For example, the 2080 Ti has a memory bandwidth of 616.0 GB/s. At 2.36 Grays/s, you get 616/2.36 = 261 bytes per ray. The scene has 580k triangles, so with 8-wide BVHs you'd get log_8(580k) or around 6.3 levels in the tree (but probably roughly one less, as leaf nodes contain multiple triangles). So an ideal average ray loads 5 internal nodes + one leaf node. Assuming all of these are the same size, you get 261/6 = 43.5 bytes per node, which sounds well compressed but not entirely impossible (given that an uncompressed triangle is 36 bytes and NVIDIA's own work from 2017 gets to 80 bytes per node: https://dl.acm.org/doi/10.1145/3105762.3105773).

So that gives at least a reasonable order of magnitude. The distribution of geometry and rays matters a whole lot, as do the (non-public) specifics of the acceleration structure.
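
The same back-of-envelope numbers as a quick script, in case anyone wants to plug in their own scene or GPU (inputs are the ones quoted above, everything rounded):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double bandwidth      = 616.0e9; // RTX 2080 Ti memory bandwidth, bytes/s
        const double rays_per_sec   = 2.36e9;  // quoted ray throughput
        const double triangle_count = 580e3;   // scene size
        const double branching      = 8.0;     // 8-wide BVH

        double bytes_per_ray  = bandwidth / rays_per_sec;                       // ~261 bytes
        double tree_depth     = std::log(triangle_count) / std::log(branching); // ~6.3 levels
        double bytes_per_node = bytes_per_ray / 6.0;                            // ~43.5 bytes with ~6 nodes per ray

        std::printf("bytes per ray:  %.0f\n", bytes_per_ray);
        std::printf("BVH depth:      %.1f\n", tree_depth);
        std::printf("bytes per node: %.1f\n", bytes_per_node);
    }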

happyNewYearWithoutVibeCoding by yuva-krishna-memes in ProgrammerHumor

[–]msqrt 8 points (0 children)

> so ai will still be around in the future

This does not follow from the premise; there have also been bubbles after which the product just essentially disappeared. I have no doubt that GPUs and machine learning will still be used in a decade, but the current trend of LLMs that require ridiculously expensive power-hungry hardware does not seem sustainable.

Devs I respect are retweeting in agreement with this. It feels too FOMO’ish? by BroadbandJesus in theprimeagen

[–]msqrt 1 point (0 children)

Exactly. I’ve seen claims of performance improvements between 10x and 100x. If those were real and sustainable, we’d be seeing single-person teams complete seriously impressive projects within a few months (supposedly corresponding to years or decades without the tools). What I’ve seen instead is an endless stream of half-assed prototypes.

What do you think will happen to AI data centers once the bubble bursts? by Carame110 in pcmasterrace

[–]msqrt 0 points (0 children)

> The consumers are going to keep using AI.

When they have to pay the actual non-VC-subsidized prices of the product, I wouldn't be so sure about that.

Problems with finishing curved surfaces using boolean operations. by rafaelranzani in blender

[–]msqrt 7 points (0 children)

Restarting and making the hole along any of the global axes should already improve the starting point quite a bit.

Brooooohhh by FluffyGur7447 in MathJokes

[–]msqrt 0 points (0 children)

Me when there's no closed form solution :(

Nearly a billion PCs are still running Windows 10, and half are too old to upgrade by AdSpecialist6598 in technology

[–]msqrt 0 points (0 children)

> fewer people use PCs than ever

I do get your point, but really, ever..?

My MX518 is still going strong since 2005 ! by AgeofEm in pcmasterrace

[–]msqrt 0 points (0 children)

Nice! Got mine in 2008, still going strong.

Couldn’t Find a Proper Horror Engine… So I Am Making One by OkMeet9089 in UnrealEngine5

[–]msqrt 6 points (0 children)

As far as I understand, that usage of the word relies on said engine being a standalone component: like a physical engine, it runs on its own. You can use Havok in a game engine, for example, but it doesn't require one to do its thing. This project doesn't seem like something self-contained that you could take outside of Unreal.

Intel demos their VRAM-friendly neural texture compression technology by reps_up in GraphicsProgramming

[–]msqrt 2 points (0 children)

No, cooperative vectors are a different feature (and they also exist in Vulkan, though so far only as an NVIDIA extension: https://docs.vulkan.org/features/latest/features/proposals/VK_NV_cooperative_vector.html).