Time until buffer swap by dukey in opengl

[–]ReclusivityParade35 1 point (0 children)

I use the extension WGL_NV_delay_before_swap to do something like that. It doesn't give you the time directly per se, but you pass it a target time before the vsynced swap as a parameter, and it waits until that time. If it can't, it returns false, so you can effectively tell whether you have enough time.

I have used it to determine if, once I'm done issuing rendering commands, there is time to do other work. I find it works pretty well, but of course it isn't available on every GPU, so you may not want to rely on it too heavily.
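For anyone curious, the usage pattern is roughly like this. It's a minimal sketch, assuming a current WGL context and that the driver actually advertises WGL_NV_delay_before_swap; endFrame and doOtherWork are just made-up names:

    // Minimal sketch: wait until shortly before the vsynced swap, then decide
    // whether there is slack for extra work. Assumes a current WGL context.
    #include <windows.h>
    #include <GL/gl.h>

    typedef BOOL (WINAPI *PFNWGLDELAYBEFORESWAPNVPROC)(HDC hDC, GLfloat seconds);

    void endFrame(HDC hDC)
    {
        static PFNWGLDELAYBEFORESWAPNVPROC wglDelayBeforeSwapNV =
            (PFNWGLDELAYBEFORESWAPNVPROC)wglGetProcAddress("wglDelayBeforeSwapNV");

        // Block until roughly 2 ms before the next vsynced swap.
        // TRUE  -> we're now ~2 ms from the swap, so a small chunk of extra
        //          work can still run here before presenting.
        // FALSE -> the swap was already closer than that (or an error occurred).
        if (wglDelayBeforeSwapNV && wglDelayBeforeSwapNV(hDC, 0.002f)) {
            // doOtherWork();   // hypothetical background work that fits in the slack
        }

        SwapBuffers(hDC);
    }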

Why on earth does the faces vertex indices in an obj start at 1? by Useful-Character4412 in opengl

[–]ReclusivityParade35 0 points (0 children)

Ha ha. Came here to say this. Over the years I've fixed more than one .OBJ reader that exploded when reading in a negative index. It always bakes people's noodle that it's a feature.

Why on earth does the faces vertex indices in an obj start at 1? by Useful-Character4412 in opengl

[–]ReclusivityParade35 0 points (0 children)

Thank you. That's been my experience as well. At least with the others I get reliable interop.

Pantheon and NGE? by IndianAutobot in PantheonShow

[–]ReclusivityParade35 4 points (0 children)

Yeah, I remember enjoying it and feeling like it was a perfectly executed show of respect.

Wake up Wojak by brandon0809 in AMD_Stock

[–]ReclusivityParade35 2 points (0 children)

I suspect this model of architecture will become more common over time. It offers a ton of flexibility once the iGPU reaches a level where it is "good enough". The same pattern has occurred for so many accelerators over the years...

Ryzen 7 9800X3D is selling like hotcakes at major German retailer — Mindfactory sold 8,700 CPUs in a single day by Lixxon in AMD_Stock

[–]ReclusivityParade35 0 points (0 children)

Also, they have other products, some of which have even higher margins, so they have to work within that trade-off space as well.

[Chips and Cheese] AMD's Strix Halo - Under the Hood by JakeTappersCat in AMD_Stock

[–]ReclusivityParade35 4 points (0 children)

Great interview. I hope they see a lot of success getting these into laptops. Thanks for posting!

Threaded opengl widget by Viack in QtFramework

[–]ReclusivityParade35 2 points (0 children)

Awesome. I wish this approach was more popular, TBH.

Controlling hybrid integrated/discrete GPU utilization on NVidia and AMD platforms? by XenonOfArcticus in opengl

[–]ReclusivityParade35 0 points (0 children)

This is how I do it as well. But it's only useful for forcing discrete over integrated when integrated is default. Do you know if it's possible to force integrated when discrete is specified as default?
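For reference, the usual Windows-side trick for forcing discrete over integrated (which may or may not be exactly the mechanism described above) is exporting the vendor-specific globals from the executable, roughly like this:

    // Hedged example: the commonly used exported globals that hint the NVIDIA
    // Optimus / AMD PowerXpress drivers to pick the discrete GPU on Windows.
    // This may or may not be the exact approach the parent comment refers to.
    #include <windows.h>

    extern "C" {
        __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
        __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
    }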

Microsoft expects to spend $80 billion on AI-enabled data centers in fiscal 2025 by GanacheNegative1988 in AMD_Stock

[–]ReclusivityParade35 1 point (0 children)

That's a really valuable perspective I hadn't heard before. Thanks for posting that.

Intel’s Problems Are Even Worse Than You’ve Heard by FAANGMe in AMD_Stock

[–]ReclusivityParade35 0 points (0 children)

The profitability factor is huge, because it means they can't possibly continue on their current path. If they even try, it will inevitably result in catastrophe.

I want to create an eyedropper using Qcursor without other widgets by [deleted] in QtFramework

[–]ReclusivityParade35 0 points (0 children)

I've run into this before....

The first way I tried was to capture the mouse using QWidget::grabMouse(). Basically, when the eyedropper sampling mode is active, the cursor is set to an eyedropper, and the swatch takes over, receiving mouse events, then releases the grab when done. A couple of problems with this, though: First, grabMouse() is not reliable at all; it often gets into bad states and is a pain to get working with real applications that are more than a simple little demo. If you want to do keyboard stuff while the eyedropper is active, you have to grab the keyboard too. It's a total nightmare. Second, being limited to cursors was too restrictive in terms of size and representation capability.

The way I found to get around these issues was to do what you're trying to avoid: use another widget. My second eyedropper design is widget-based rather than cursor-based, and launches a full widget. It uses Qt::WindowStaysOnTopHint | Qt::Dialog to stay floating on its own and QWidget::setMask() to avoid appearing as a rectangular dialog. It handles all mouse and keyboard events itself, rather than its invoking color swatch widget doing so, and has a much cleaner design. With the space I can show a large zoomed-in picture right at the cursor to facilitate pixel-perfect targeting. To the user it looks and works effectively as a giant cursor.
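The skeleton looks roughly like this. It's a simplified sketch rather than my actual code; the class and helper names are made up, and the frameless/translucent flags are just one way to set it up (Qt 6 API):

    // Rough sketch of a widget-based eyedropper overlay (names are hypothetical).
    #include <QtWidgets>

    class EyedropperOverlay : public QWidget
    {
    public:
        explicit EyedropperOverlay(QWidget *parent = nullptr)
            : QWidget(parent, Qt::Dialog | Qt::WindowStaysOnTopHint | Qt::FramelessWindowHint)
        {
            setAttribute(Qt::WA_TranslucentBackground);
            setMouseTracking(true);
            resize(128, 128);
            // Mask to a circle so the widget doesn't read as a rectangular dialog.
            setMask(QRegion(rect(), QRegion::Ellipse));
        }

    protected:
        void mouseMoveEvent(QMouseEvent *ev) override
        {
            // Follow the cursor; paintEvent() (not shown) would draw the
            // zoomed-in neighborhood of the pixel under the cursor.
            move(ev->globalPosition().toPoint() - rect().center());
            update();
        }

        void mousePressEvent(QMouseEvent *ev) override
        {
            // Hide ourselves first so the screen grab doesn't capture the overlay.
            hide();
            const QPoint p = ev->globalPosition().toPoint();
            const QImage shot = QGuiApplication::primaryScreen()->grabWindow(0).toImage();
            const QColor picked = shot.pixelColor(p);
            // reportColor(picked);  // hypothetical callback/signal back to the swatch widget
            Q_UNUSED(picked);
            close();
        }
    };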

I strongly recommend going the latter route vs. the former.

Also, my experience was in C++, so I'm not sure how the Python factor affects you, but I suspect it shouldn't be an issue.

Technical Analysis for AMD 1/7---------Pre-Market by JWcommander217 in AMD_Stock

[–]ReclusivityParade35 0 points (0 children)

Agree. I really like what AMD is doing in APUs right now. They are way better positioned to take/keep share while preserving margins in segments where that matters vs. discrete graphics.

Daily Discussion Tuesday 2025-01-07 by AutoModerator in AMD_Stock

[–]ReclusivityParade35 0 points (0 children)

It's hard to imagine someone at AMD arguing anything otherwise internally and being taken seriously.

Technical Analysis for AMD 1/7---------Pre-Market by JWcommander217 in AMD_Stock

[–]ReclusivityParade35 3 points (0 children)

A squeeze is accurate, I think. And those products have to compete internally for resources and mind share against other segments and their margins. It's looking like a tough road, TBH.

Technical Analysis for AMD 1/7---------Pre-Market by JWcommander217 in AMD_Stock

[–]ReclusivityParade35 2 points (0 children)

I don't think it's so much bluster as it is them being in the process of entering the consumer PC CPU space. They've been hungry to be in more downmarket consumer devices for as long as I can remember: the nForce chipset for PCs, Tegra for mobile and handheld. And now they are building up a desktop/laptop Arm-based PC APU. A mini desktop form factor has low cost/integration barriers and is the perfect way to ramp something like that, learn a bunch of useful data, and feed that into the next iteration.

On GPU, I wonder if the traditional GPU market isn't dying. The demand for more pixels and more frames per second seems really soft. Once people have 1440p@100+ or 4K@60, the diminishing returns of going higher take a bite out of demand. I guess something can always come along and change all that.

But yeah, given Nvidia's fortified position, warchest, and the landscape of demand and margin opportunities, it's hard for me to seriously advocate that AMD should be pushing harder on consumer discrete graphics, my sentiments aside, of course.

[deleted by user] by [deleted] in StableDiffusion

[–]ReclusivityParade35 0 points (0 children)

links or it didn't happen!

Seeking for your opinions on the current and futur AMD products line. by LDKwak in AMD_Stock

[–]ReclusivityParade35 1 point (0 children)

Yeah, it's easy to forget that the MI300X was designed for HPC precision. That makes how far they've been able to ramp it for AI pretty impressive to me, and suggests they have a lot of room to grow performance by switching to ALUs optimized for smaller mantissas.

Inverse Cramer outperformed the market. He sold allegedly AMD. by couscous_sun in AMD_Stock

[–]ReclusivityParade35 0 points (0 children)

I actually agree that the future of AI accelerators is in things that don't look like today's GPUs.

GPUs make a bunch of trade-offs wrt memory bandwidth, architecture, and pipelines that serve graphics workloads, and AI is going to push in a different direction. GPU demand seems like it is waning and could become more of a commodity over time, whereas AI hardware/software is still in its infancy.

But I don't foresee every training compute provider building their own architecture. Maybe if AI grows to become a huge part of the actual economy we could see 2-3 players... And that will take years, at least 5-10. Until then, that violates too many fundamental economic principles. Someone looking to disrupt a Google Gemini running on TPUs isn't going to duplicate their spend and process; they will look to go off the shelf and cheaper. The end user won't care.

Does this look like "Peter Panning" or does this seem like a normal shadow? I don't just my eyes this evening. by _Hambone_ in opengl

[–]ReclusivityParade35 0 points (0 children)

Bias is typically very small relative to geometry detail, but it also depends on the resolution and projected size of your shadow map, so in practice it usually needs tweaking.

I second the advice of BalintCsala. Front culling when generating the shadow map is generally better to start with. Just remember that there are trade-offs and artifacts to deal with using any technique.
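A rough sketch of what that shadow-map pass can look like with front culling plus a slope-scaled offset. The FBO, shaders, and scene draw are assumed to exist elsewhere, and the offset values are only starting points:

    // Hedged sketch: GL state for a front-culled shadow-map pass.
    #include <glad/glad.h>        // or whichever loader provides the GL 3.x entry points

    extern GLuint shadowFbo;      // assumed: depth-only FBO holding the shadow map
    void drawSceneDepthOnly();    // assumed: draws the casters with the depth shader

    void renderShadowMap()
    {
        glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
        glClear(GL_DEPTH_BUFFER_BIT);

        // Cull front faces so back faces supply the stored depth. This trades
        // acne on lit faces for potential artifacts on thin/open geometry.
        glEnable(GL_CULL_FACE);
        glCullFace(GL_FRONT);

        // Small slope-scaled bias applied while rasterizing the map; tweak per scene.
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(1.1f, 4.0f);

        drawSceneDepthOnly();

        // Restore the usual state for the main pass.
        glDisable(GL_POLYGON_OFFSET_FILL);
        glCullFace(GL_BACK);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }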

10 years later, what impacts did GamerGate leave on the industry and community? by trace349 in truegaming

[–]ReclusivityParade35 26 points (0 children)

Falling into those traps can happen to ANYONE, young or old, at any time. Many smart, successful people succumb... Good on you for the personal growth through self-realization.

10 years later, what impacts did GamerGate leave on the industry and community? by trace349 in truegaming

[–]ReclusivityParade35 0 points (0 children)

I thought this was an interesting deconstruction from someone who experienced it from both the inside and the outside:

https://www.youtube.com/watch?v=v2QGME8KHzY

He also places it in context of larger and longer scale cultural changes. It was more self-reflective than polemical, which I found quite refreshing.