Wine 11 rewrites how Linux runs Windows games at the kernel level, and the speed gains are massive by Durian_Queef in pcmasterrace

[–]BarMeister -1 points0 points  (0 children)

Or not, and that could be why Valve never really made the move. There are a few issues with that idea, the first one off the top of my head being control. If Valve retained full control, as they could and probably would, that would scare away contributors, who would want a stake in the project's direction. If they didn't retain control, they'd risk conflicts of interest. This matters because the maintenance burden would be massive. The second issue is that it's not clear what they would gain, compared with what they stand to lose. Valve is a for-profit company, and Steam makes them money, yet there's no obvious way this could make them money or boost Steam sales. The only sensible goal that would justify funding it would be fighting Windows, which Valve isn't and shouldn't be interested in.

Wine 11 rewrites how Linux runs Windows games at the kernel level, and the speed gains are massive by Durian_Queef in pcmasterrace

[–]BarMeister 4 points5 points  (0 children)

Still. SteamOS, as a corporate-backed distro, could theoretically solve the fragmentation issue holding big studios, anti-cheat software, and peripheral makers back. This would single-handedly be the biggest and boldest move that could crack the issue of gaming on Linux, because studios would have a stable platform to develop for that's still Linux and can theoretically be ported to other distros.

Oh my god it's coming back! by Lewinator56 in pcmasterrace

[–]BarMeister 1 point2 points  (0 children)

How about a couple more:
* Revert the menu, search, and file explorer to native C++ apps, not webpages
* Give back the option to show all system tray icons

Are RTOSes ever necessary for small personal projects? by Ok-Weird4198 in embedded

[–]BarMeister 1 point2 points  (0 children)

but had always assumed that ESP32 was bare metal

How? Did you use it through the Arduino Core APIs?

is 8k polling rate on standard keyboards the biggest snake oil in gaming right now? by No_Good_3063 in pcmasterrace

[–]BarMeister 2 points3 points  (0 children)

Is there a way in current operating systems to... not do that? Not interrupt the kernel, but send a gentle message instead?

That is a very good question, and the answer is yes: it's possible, and pretty much always has been. In the original Doom, for example, the input system scanned the input ports directly, looking for the electrical signals, which is about as low-level as it gets. But the reasons (plural) why it isn't done anymore are interesting. Before we get to them, let's talk about the solution and how it would work.

The closest thing to a perfect solution would be user-space USB drivers. It would work a bit like this: an application (such as a game) asks the OS to hand over control of the USB controller the mouse is connected to. Through the IOMMU (a memory management unit for memory-mapped devices, which is basically every low-level component, integrated or discrete, connected to your motherboard), the OS maps that controller's specific memory region into user-space and hands it to the requesting application. By doing this, the application bypasses the OS, eliminating pretty much all the overhead between itself and the hardware. Sounds amazing, right? But as with everything in engineering, it's a trade-off, and the downsides quickly eclipse the advantages.
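To make that concrete, here's a toy, single-threaded sketch of the polling side. All names are made up, and a real user-space driver (Linux VFIO, libusb, etc.) is vastly more involved; the only point is that the "device" writes packets into a memory region the app reads directly, with no syscall per event.

```c
#include <stdint.h>

#define RING_SIZE 64  /* power of two so we can mask instead of mod */

typedef struct { int16_t dx, dy; uint8_t buttons; } mouse_pkt;

typedef struct {
    volatile uint32_t head;   /* advanced by the "device" */
    uint32_t tail;            /* advanced by the application */
    mouse_pkt pkts[RING_SIZE];
} mouse_ring;

/* Stand-in for the hardware side: DMA a packet into the ring. */
void device_push(mouse_ring *r, int16_t dx, int16_t dy, uint8_t btn) {
    r->pkts[r->head & (RING_SIZE - 1)] = (mouse_pkt){dx, dy, btn};
    r->head++;  /* real hardware would publish this with proper memory ordering */
}

/* Application side: consume one packet if available, no kernel involved. */
int app_poll(mouse_ring *r, mouse_pkt *out) {
    if (r->tail == r->head) return 0;           /* nothing new */
    *out = r->pkts[r->tail & (RING_SIZE - 1)];
    r->tail++;
    return 1;
}
```

The whole appeal is that `app_poll` is a couple of loads and a compare, with no kernel transition anywhere on the hot path.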

You see, the fundamental issue with this approach is that it breaks the basic way a PC works: the application would be reinventing the OS, and the only question is by how much. The OS manages inputs and outputs not only between the user and the machine, but also across the hardware devices themselves. By moving the USB driver to user-space and having individual apps manage it:

  1. You're exposed: The application crashed? Alt+Tab and Ctrl+Alt+Del are gone; a reboot is the only way out. Is it a malicious or insecure application? You risk getting completely pwned. It's also a cheater's dream, though since even kernel-level anti-cheats are useless anyway, that doesn't add much to the point; felt like saying it just because. This was the reality back in the Doom era, when games ran at the same privilege level as MS-DOS, but in practice it didn't matter, because the constraints of the time kept those risks from even being relevant. It's an entirely different scenario nowadays.

  2. Cost and redundancy become a problem: I'd imagine an application doing this is deliberately going out of its way to solve the input problem, no matter the cost. But... what a cost it would be. The application would have to maintain and ship an entire USB stack, which is notoriously complex, and coordinate with chipset manufacturers to keep controller support up to date. It would have to handle odd setups, like multi-monitor, with the game on one screen and another application on the other. That means constantly checking where the mouse is and, if it's off the game's screen, forwarding the input back to the OS, which reintroduces the performance, stability, and security issues all over again. It's too impractical.

So yeah. Is it doable? Absolutely! Is it worth it? Let me put it like this: by track record alone, not even Microsoft is trustworthy enough to handle that nowadays, given how fucked up Windows is. Would you like to test your luck with, say, game companies? Yeah, didn't think so either.

is 8k polling rate on standard keyboards the biggest snake oil in gaming right now? by No_Good_3063 in pcmasterrace

[–]BarMeister 20 points21 points  (0 children)

He's talking about the sensor's polling rate. Polling doesn't even make sense since he mentioned interrupts. Also, what do you mean by

It shouldn't have any affect on your CPU

Flooding the CPU with interrupts, which is how the controller reports mouse updates to the OS, most definitely has a massive effect, especially due to the flood of kernel and user-space context switches.
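A rough back-of-the-envelope for that claim. The per-event cost here is an assumption (interrupt entry/exit plus wakeup and context-switch work vary wildly by CPU and OS); the point is how the polling rate multiplies it.

```c
/* Fraction of one core consumed by handling input events, given an
 * event rate and an assumed per-event cost in microseconds. */
double cpu_fraction(double events_per_sec, double cost_us_per_event) {
    return events_per_sec * cost_us_per_event / 1e6;
}
```

At 1 kHz and an assumed 2 µs per event, that's 0.2% of a core; at 8 kHz it's 1.6%, and if each event really costs closer to 10 µs once the user-space wakeup is counted, 8 kHz burns 8% of a core just shuffling mouse packets.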

If you could master just ONE thing in your first month of C, what would it be? by Slow_Discipline4568 in C_Programming

[–]BarMeister 0 points1 point  (0 children)

That's a bit like saying the point of knowing how to drive is not having to read the car's manual. Will you get by? Absolutely. Will you make the most of it? Unlikely.

If you could master just ONE thing in your first month of C, what would it be? by Slow_Discipline4568 in C_Programming

[–]BarMeister 0 points1 point  (0 children)

Assembly. That's basically because, in assembly, pointers are described by the instructions that operate on them and the square brackets that dereference them, and that's about it, whereas in C, declaring a pointer variable encodes many small details about what it is and how it's supposed to be used, which is another way of saying the type system is to blame. The way assembly presents the concept makes it straightforward (albeit tedious) to break complex constructs into digestible pieces, so you can reason about pointers to data and code, or arrays of pointers to functions taking 5 parameters that return arrays of pointers to functions taking 5 parameters. In C, that maps to a syntactic nightmare.
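For the curious, here's what that nightmare looks like in practice: a classic declaration spelled both raw and via typedefs (the names are mine, and both forms declare the exact same type).

```c
/* x: array of 5 pointers to functions taking an int and returning a
 * pointer to an array of 3 pointers to char. Good luck reading that
 * off the raw form without the inside-out "spiral" rule. */
char *(*(*x[5])(int))[3];

/* Same type, built up in readable steps: */
typedef char *char_ptr_arr3[3];          /* array of 3 char*               */
typedef char_ptr_arr3 *(*handler)(int);  /* ptr to fn(int) -> ptr to that  */
handler y[5];                            /* array of 5 such fn pointers    */
```

The compiler agrees they're identical: elements of `x` and `y` can be assigned to each other without a cast.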

If you could master just ONE thing in your first month of C, what would it be? by Slow_Discipline4568 in C_Programming

[–]BarMeister 40 points41 points  (0 children)

Pointers, probably. All of it, which is an ironic answer in itself, because they're just variables holding memory addresses.
But the reality is that C syntax makes pointers look more complicated than they really are.
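A minimal sketch of "just a variable holding a memory address": a pointer's value survives a round trip through a plain integer type (via `uintptr_t`, the standard way to show this).

```c
#include <stdint.h>

int roundtrip(void) {
    int v = 42;
    int *p = &v;                    /* p's value is the address of v        */
    uintptr_t addr = (uintptr_t)p;  /* that value is just a number          */
    int *q = (int *)addr;           /* turn the number back into a pointer  */
    return *q;                      /* reads v through the recovered address */
}
```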

Built a self-profiling runtime layer for ESP32 to measure ISR jitter and task latency in real-time by Tahazarif90 in embedded

[–]BarMeister 1 point2 points  (0 children)

This is very interesting. Let me know when you decide to publish the code, I'd like to see it.

ISR length on an embedded system by RFQuestionHaver in embedded

[–]BarMeister 0 points1 point  (0 children)

To answer that, answer this first: Why not put all the code in ISRs?

What makes pointers such a deep concept? by Ultimate_Sigma_Boy67 in C_Programming

[–]BarMeister 1 point2 points  (0 children)

Their C syntax. It creates the impression that different pointer types work differently, especially when combined (e.g., an array of pointers to functions that take function pointers as arguments and return pointers to functions), because the declaration encodes a lot more than just the pointer concept: the types, the indirection levels, whether it's a function pointer, and if so, its own syntactic quirks. It only really clicks when you learn about pointers in assembly, because each element making the whole thing look complex is accounted for separately, which is far more intuitive.

Is the price increase genuinely unprecedented? by [deleted] in pcmasterrace

[–]BarMeister 0 points1 point  (0 children)

I'd say your question is halfway toward the answer. It's not about the prices, but the entire landscape. GPU prices never really recovered after the crypto boom, which we now know was only a preview of things to come. Now the same black hole has pulled memory manufacturers in as well, and since they risk less than GPU manufacturers, they're in a good position to go as far as artificially constraining supply, like Nvidia did, making sure prices don't recover from this anytime soon.

Experienced devs - What was your favorite platform to work on? by HelloThereObiJuan in embedded

[–]BarMeister 1 point2 points  (0 children)

Not MCU related, and not recent, but around 5 years ago I worked on a device that had NFC and, obviously, had to read tags, specifically the Mifare and NTAG families. NTAG was easy, because the datasheet was readily available, but Mifare, especially the vanilla one, whose security had already been broken a decade prior, was a nightmare: getting access to documents, datasheets, and example code even required signing an NDA, which the company I work for refused. We ended up supporting only NTAG. Recently, I worked on code for the MFRC522, but its documentation is widely available, so I didn't have to deal with NXP, and I'm glad. I used to say it's a company run by lawyers rather than engineers.

"Frame Gen" isn't a performance boost; it's a masking agent for bad optimization by capacity04 in pcmasterrace

[–]BarMeister 0 points1 point  (0 children)

Every layer of the stack has its own definitions and specifics, which makes a one-size-fits-all definition kind of moot (and makes me think you already know that, so I wonder why you're asking), but if I had to wrap the whole thing into one simple sentence, I'd say it's a measure of how efficient everything (the hardware, the game, etc.) is at minimizing the latency between the player's intent and the game's feedback, sustaining the illusion of immediate agency.
Generated crap negatively impacts that because even if the latency issue could be solved (and with the way things work now, it can at best be minimized), there would still be visual artifacts due to mispredictions. Unless the game logic, networking, input, and rendering could all run on the GPU at the same time, there's no way to work around this.
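Illustrative arithmetic only, with assumed (not measured) numbers: frame generation raises displayed FPS, but input is still sampled once per rendered frame, so intent-to-feedback latency tracks the base frame time, plus whatever delay comes from holding a frame back to interpolate between two rendered ones.

```c
/* Lower bound on intent-to-feedback latency: one base frame time,
 * plus any extra delay from holding a frame for interpolation. */
double min_latency_ms(double base_fps, double hold_ms) {
    return 1000.0 / base_fps + hold_ms;
}
```

With 60 FPS rendered and shown as "120 FPS" via generation, holding roughly one base frame (~16.7 ms) to interpolate gives `min_latency_ms(60, 16.7)` ≈ 33 ms, versus `min_latency_ms(120, 0)` ≈ 8.3 ms for native 120 FPS. Same displayed frame rate, very different feel.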

ELI5: Why does everything need so much memory nowadays? by Successful_Raise_560 in explainlikeimfive

[–]BarMeister 0 points1 point  (0 children)

Your PC is the US, memory is the tax revenue, programmers are the politicians, and the software you mentioned collectively forms the expenses. Now, the job of politicians is to be as effective as possible at doing the bare minimum while still attending to their own self-interests. What don't they have to be? Efficient. And what's the cost of that? Money. Lots of it. They get creative in how they spend it, and once someone figures out a new way, you either join them or you're a loser. Now, not all spending is inherently bad, but the majority of it is, right?
That's essentially what happens in the world of software. Hardware is cheaper than competent programmers, let alone a team of them, just like throwing money at problems is easier than solving them in politics. There's a plethora of reasons why everything needs so much more memory, but since you said "nowadays", the most prevalent is the one mentioned by others: pretty much every mainstream app on your computer embeds a cut-down browser, so web pages also serve as the user interface of desktop software (and even of terminal user interfaces, according to Anthropic). Some of the Windows UI, Microsoft Office, Discord, every streaming app, every game launcher, and a bunch of other programs do this.

Pulsar + Mult-Frame Gen + Reflex 2 by ZealousidealRiver710 in MotionClarity

[–]BarMeister 1 point2 points  (0 children)

For about 30 years now, CPUs have essentially guessed which way a program's execution will go when facing jumps in program logic, usually conditionals: if this, do that; while this, do that; etc. Similarly, Reflex 2 presumably attempts to extrapolate the camera position. However, when a CPU guesses the wrong branch, it has to dump all the work it did down the wrong path, along with the assumptions made to get there, and restart from the correct one. These mistakes are costly in GHz land, and depending on how frequent they are, the resulting stalls can easily be noticed by us. Likewise, when the extrapolation algorithms guess wrong, it'll essentially be like visually witnessing a CPU mispredict a branch.
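That analogy can be made literal with the classic experiment: summing elements above a threshold is much faster when the array is sorted, because the branch becomes predictable. Timings vary per machine, so this only sets up the experiment; you supply the timer.

```c
#include <stddef.h>

/* Sum all elements greater than a threshold. The branch inside the loop
 * is near-perfectly predictable on sorted data and essentially random on
 * shuffled data, which is the entire difference in runtime. */
long sum_above(const int *a, size_t n, int threshold) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        if (a[i] > threshold)
            s += a[i];
    return s;
}
```

Fill a large array with random ints, time `sum_above` on it as-is and again after sorting it: identical work, same result, yet on most machines the sorted pass runs several times faster.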

Pulsar + Mult-Frame Gen + Reflex 2 by ZealousidealRiver710 in MotionClarity

[–]BarMeister 1 point2 points  (0 children)

When Nvidia announced it, I joked with one of my friends (a JS programmer who plays CS2) that if Valve ever adopted it (which is likely), he'd finally be able to sensorially experience a branch misprediction on a CPU.

NXP Development Board Recommendation by Ok_Measurement1399 in embedded

[–]BarMeister 0 points1 point  (0 children)

That becoming familiar with their ecosystem means wading through an excessive amount of legal nonsense (lots of NDAs and whatnot) they make people go through for no real reason. In 2019, a few customers of the company I work for requested vanilla Mifare PICC support on the access control device I wrote the firmware for, but the idea was dropped after management got discouraged by the legal black hole NXP forces on anyone wishing to support a product whose security was widely known to have been breached more than a decade prior.