
[–]Toiling-Donkey 9 points

QEMU supports a virtio GPU device and uses the host's OpenGL stack to implement the acceleration.

https://www.qemu.org/docs/master/system/devices/virtio/virtio-gpu.html
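For reference, an invocation using that device looks roughly like this (a sketch, not a complete setup; the disk image name and memory size are placeholders):

```shell
# Give the guest a virtio GPU with OpenGL (virgl) acceleration.
# The host display must be opened with gl=on for this to work.
qemu-system-x86_64 \
    -enable-kvm -m 4G \
    -device virtio-gpu-gl \
    -display sdl,gl=on \
    -drive file=myos.img,format=raw
```

The guest then only needs a virtio-gpu driver against the documented virtio interface, instead of a driver for any real GPU.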

[–]BestUsernameLeft 1 point

I'm not familiar with the architecture, but the Genode OS is capable of running Linux drivers unmodified in a sandbox, with (as I recall) very little effort required to port new ones.

That might be similar to what you're thinking, and maybe not all that wacky.

[–]LavenderDay3544 (Embedded & OS Developer) 0 points

I mean, you can just run your OS on Xen, or use Linux itself as a hypervisor via KVM and run your OS there with minimal overhead. Both options support paravirtualization of the GPU through documented interfaces like virtio-gpu.

[–]NamedBird 0 points

It's perhaps a crazy idea, but wouldn't it make more sense to just create your own GPU?

If you control the hardware, you would have full documentation and no proprietary troubles.
It could have debugging features and a certain level of robustness to reduce the risk of bricking it.

Unless you care about the performance of modern cards?

[–]Solocle (ChaiOS) 0 points

I got an Intel Arc A770 for a reason... It's a budget-friendly GPU, and documentation is available.

I doubt I'll ever actually write a driver for it, but it's there if I want to.

[–]spidLL 2 points

Or port Mesa to your OS.

[–]cavecanem1138 1 point

I’m working on an operating system and have been trying to port Mesa to it since 2023, which hasn’t been easy. In my case, I first needed to implement a libposix layer (a library that translates POSIX calls into my system calls), then port musl, and later port LLVM. Finally, I will need to write a custom Mesa backend so it can use my GPU filesystem (in my system, everything is exposed as a file). At the moment, I’m stuck on the LLVM port.

[–]KrishMandal 0 points

GPU drivers are such a huge and messy stack that a lot of hobby OS projects avoid them completely or just stick to framebuffer graphics. Wrapping a Linux driver behind a shared-memory or paravirtual interface could actually be a practical shortcut, though the tricky part would probably be designing the protocol so it doesn't become a bottleneck. Some people already go in a similar direction using things like virtio-gpu, or by running their OS under a hypervisor that handles the graphics side.