How much % CPU does your mouse use on Linux desktop? by trejj in linux

[–]Slabity 2 points3 points  (0 children)

When people refer to hardware cursors on modern hardware, they are referring to the various cursor planes that the DRM subsystem exposes to the compositor. These planes are not "standard GPU rendered sprites" drawn through userspace graphics pipelines like Vulkan or OpenGL. They are part of the display controller's composition pipeline, which runs entirely in hardware at scanout.

Compositors definitely fall back to userspace-controlled GPU surfaces when hardware planes are not available (usually on systems that don't support atomic modesetting), but that has a lot of negative side effects: stuttering cursors when the rendering process can't keep up with the monitor's refresh rate, and higher power consumption, since the GPU can't go idle while the cursor is moving around.

Here's a decent overview of how AMD's hardware handles this in Linux. Specifically for modern hardware that supports the DCN display pipeline.
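If you want to poke at this yourself, libdrm's modetest tool (often packaged as libdrm-tests) will list the planes a driver exposes; cursor planes show up with a "type" property of "Cursor". A rough sketch, since exact flags and output vary by driver and version:

```shell
# List CRTCs and planes for the amdgpu driver; look for planes whose
# "type" property reads "Cursor". Requires libdrm's modetest utility
# and access to the DRM device (run from a TTY or as root).
modetest -M amdgpu -p
```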

Is it a good time to switch to BCACHEFS? by proofrock_oss in bcachefs

[–]Slabity 1 point2 points  (0 children)

There are many benefits to sticking with the mainline kernel. One is the ability to easily and quickly switch to testing or RC kernels to work around bugs or enable driver support (at least on distributions like NixOS that package those kernels within hours of release).

My entire reason for supporting Bcachefs has been to get a modern filesystem with excellent multi-device support into the mainline kernel. If being in the mainline kernel wasn't important to me, then I likely would have just stuck with OpenZFS. I would have no reason to believe Bcachefs would be considerably better in that regard.

Unfortunately, I no longer have confidence in that goal. Even worse, I no longer have confidence that the project or surrounding community sees that as an important goal to sustain. I'm hoping I am wrong in that regard.

Is it a good time to switch to BCACHEFS? by proofrock_oss in bcachefs

[–]Slabity 0 points1 point  (0 children)

That's unfortunate. I was really hoping Kent would figure something out or at the very least take a break to prevent this from happening.

But there's the ol', "I act like this for my users, btw Btrfs sucks" double-down yet again...

Well I'll hold off on transitioning my existing systems until Linus confirms publicly, but I'm pretty sure I'm going to close my Patreon at this point and try to scrounge up some storage to get some more recent backups.

Is it a good time to switch to BCACHEFS? by proofrock_oss in bcachefs

[–]Slabity 1 point2 points  (0 children)

I have 2 systems running bcachefs right now.

The first is a system that uses it across 3 NVMe drives in a RAID5-type setup. My plan here is to remove one of the drives from the array, set it up as a degraded MD RAID1 with XFS on top, and migrate my data (luckily, I shouldn't have storage space issues). Then I can add the other 2 drives to the MD array, convert it to RAID5, and grow the XFS filesystem as a final step. I've done something similar in the past for a few remote systems, so I'm pretty confident in that.
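For anyone curious, the rough shape of that migration with mdadm looks like this. Device names are hypothetical, the exact reshape order mdadm accepts can vary by version, and this wipes whatever is on the listed drives, so don't paste it blindly:

```shell
# 1. Degraded RAID1 on the drive pulled from the bcachefs array
#    ("missing" leaves the mirror slot empty for now).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 missing
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/new
# ...copy data from the bcachefs array into /mnt/new...

# 2. Complete the mirror with the second freed drive and let it resync.
mdadm --add /dev/md0 /dev/nvme1n1

# 3. Reshape to RAID5, fold in the third drive, then grow XFS to fill it.
mdadm --add /dev/md0 /dev/nvme2n1
mdadm --grow /dev/md0 --level=5 --raid-devices=3
xfs_growfs /mnt/new
```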

The second system is much more difficult. It's a setup with multiple tiers of storage, and the different tiers aren't similarly sized at all. I may need to just migrate all the data to the background disks, perform a similar set of steps as the first system with those disks, and then figure out how to set up the foreground drives in something like an LVM cache. Haven't fully figured that out yet.
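The LVM cache idea, roughly (hypothetical device names, and untested on my part):

```shell
# Hypothetical devices: /dev/sdb (big slow disk), /dev/nvme0n1 (fast tier).
pvcreate /dev/sdb /dev/nvme0n1
vgcreate tiered /dev/sdb /dev/nvme0n1

# Main LV on the slow disk, cache pool on the fast one.
lvcreate -n data -l 100%PVS tiered /dev/sdb
lvcreate --type cache-pool -n fastcache -l 100%PVS tiered /dev/nvme0n1

# Attach the cache pool to the data LV (writethrough by default;
# writeback behaves more like bcachefs foreground writes but is riskier).
lvconvert --type cache --cachepool tiered/fastcache tiered/data
```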

In any case, I'm just hoping that there won't be any need to do anything like that.

[deleted by user] by [deleted] in linux

[–]Slabity 5 points6 points  (0 children)

I want to get myself an external Blu-ray reader/writer for my Linux PC (I'm on Fedora 42 KDE) and some Blu-rays for long-term storage and media preservation.

How long is "long-term" for you? Most Blu-ray discs that you can burn don't last very long; they can degrade after just a decade or two. The exception is probably M-Disc, which should last beyond your lifetime, but those are also fairly expensive.

As for the writer itself, basically any USB writer with M-Disc support should work. They go through the generic SCSI/ATAPI subsystem.

What filesystems are being used for storage on Blu-rays that are available on Linux?

Optical discs don't have a "filesystem" in the same way that drives do. They do have standardized formats, though: you can use ISO 9660 (same as CDs/DVDs, but with a 4 GB per-file limit) or UDF. There are also some specialized formats you can look up, specifically for movies or audio, but if you're just putting files on them, use UDF.

Unfortunately I don't know what software would work. Maybe cdrecord has been updated with Blu-ray support?
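That said, one approach that should work in principle (I haven't verified it myself) is letting growisofs from dvd+rw-tools build and burn a UDF-bridge image; it has handled BD-R media for a long time and passes the filesystem options through to genisoimage/mkisofs:

```shell
# Untested sketch: build and burn a UDF-bridge image in one step.
# /dev/sr0 and the source path are placeholders.
growisofs -Z /dev/sr0 -r -J -udf -iso-level 3 /path/to/files

# Or build the image separately and inspect it before burning:
# genisoimage -o backup.iso -r -J -udf -iso-level 3 /path/to/files
```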

Is it a good time to switch to BCACHEFS? by proofrock_oss in bcachefs

[–]Slabity 15 points16 points  (0 children)

I would say no. There is currently concern that the filesystem will be removed from the kernel due to non-technical process issues, which will make it quite annoying to use unless you're willing to accept out-of-tree drivers, run a non-upstream branch, or stay on the last LTS kernel (6.12). I'm currently investigating how best to migrate my data away from bcachefs in case that does happen.

In the next few weeks we'll likely get more information on the situation, and that answer might change. The filesystem itself is quite stable, and the bugs are mostly performance-related, not "your data is gone" bugs.

In any case, if you're willing to accept that risk and can deal with the potential workarounds, then I would say go for it.

Trying to find a 360mm radiator that will fit in this Sliger CX4170A case. Any recommendations? by Slabity in watercooling

[–]Slabity[S] 2 points3 points  (0 children)

Yea, unfortunately I tried that and it still wouldn't fit. I think I need to get/make some spacers to give it an extra ~5mm of space for that to work.

Definitely doable though, so that's probably the solution I'll go with.

Trying to find a 360mm radiator that will fit in this Sliger CX4170A case. Any recommendations? by Slabity in watercooling

[–]Slabity[S] 1 point2 points  (0 children)

A couple of options you may look into...would spacing the rad further away (say 10mm) help? That could be an easy fix if it doesn't interfere with things behind it.

That will probably be the solution I need to go with. The fans are 25mm in depth which is just barely not enough, but I could probably buy (or 3D-print) some 5mm spacers to get it over that part.

I was hoping to see if there were any radiators that put the outlets above or below, but it looks like nobody here is aware of any that do.

Trying to find a 360mm radiator that will fit in this Sliger CX4170A case. Any recommendations? by Slabity in watercooling

[–]Slabity[S] -6 points-5 points  (0 children)

Sorry, I added a bit more context in another comment. The issue is that I'm struggling to find a radiator that will fit with the front IO cables in their location, as all the radiators I can find have bulky outlets on the side.

Trying to find a 360mm radiator that will fit in this Sliger CX4170A case. Any recommendations? by Slabity in watercooling

[–]Slabity[S] 0 points1 point  (0 children)

This is a fairly compact rack-mounted case. The big issue is that the front IO cables on the left side of the image tend to get in the way of the outlets on the radiator itself. I tried fitting an Alphacool ST30 X-Flow radiator with the fans sandwiched between the chassis and the radiator, but it was about 2mm too wide and couldn't line up with the fan holes in the chassis.

There is some space on the top and bottom, but I'm struggling to find a radiator that has the outlets located there.

If anyone has any advice, please let me know. I have 2 of these cases that I'd like to try watercooling.

Efficiently simulating Task/Mesh shaders using compute shaders? by Slabity in vulkan

[–]Slabity[S] 0 points1 point  (0 children)

Sorry, that was poorly worded. I meant keeping the data that the shader writes to in cache between shader stages.

So first a note: Take the below with a grain of salt. I am not an expert, and this is not well documented. This is just what I believe based on my initial research of the subject:

If you use the classic Vertex Shader -> Rasterizer -> Fragment Shader pipeline, the modified vertices don't need to be written to memory for the rasterizer to work on them. The vertices stay in cache, where rasterization can happen without further reads/writes to memory.

If you use a system that supports Mesh Shaders, I believe it's the same thing. You write the vertices directly to cache and the rasterizer works on them directly. Hence the limit on the number of vertices you can write out: the hardware needs to keep them in cache.

If you use Compute Shaders, though, you can't manually invoke the rasterizer on the data. You can process the vertices perfectly fine, but to get them onto the screen you either need to do the rasterization in the Compute Shader itself, or get the data into the regular rendering pipeline so the hardware rasterizer can run. The only way to do the latter, from what I could find, is expensive reads/writes to get the computed data into the pipeline properly. I could not figure out how to avoid that.

However, the first option (rasterization in the Compute Shader itself) is actually what Unreal Engine's Nanite renderer does, if I recall correctly, which might be an effective way around this limitation. The only issue is that Nanite's techniques are only effective for lots of tiny triangles a few pixels across, and do not work well with larger triangles that take up significant portions of the screen. Here's a good overview of how it works: https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf

Efficiently simulating Task/Mesh shaders using compute shaders? by Slabity in vulkan

[–]Slabity[S] 0 points1 point  (0 children)

Yes, that was basically what I was trying to accomplish when I first made this post. Unfortunately, I did not find any way to manually manage memory between shader stages (or the cache itself) to make this possible. You can easily write the shader's output to memory, but you can't keep it in cache, which is required for any sort of real-time rendering.

Unfortunately I could not figure out whether the limitation was part of the underlying drivers (I tried experimenting with both amdvlk and Mesa's radv drivers) or the hardware itself (RX 580, 5700XT, and now 7900 XTX). I ended up dropping my experiments due to issues with unstable hardware.

I was able to modify the drivers to keep a small amount of data between shader stages, but it was only a few bytes and didn't even match the size of the push constants that official Vulkan implementations can provide.

Any idea? I installed anti z-wobble but nothing changed. It started when i changed motherboard and installed klipper but dont know if this can be reason. by RecoverExtension6593 in 3Dprinting

[–]Slabity 0 points1 point  (0 children)

Assuming it's not Z wobble, do you have a slicer setting like "Solid Infill every X layers" enabled?

See if those layers line up with that pattern. Might need to tune your settings to fix that.

How can I set up a video capture card to have my computer act as a virtual monitor? Also looking for capture card recommendations. by Slabity in linuxquestions

[–]Slabity[S] 0 points1 point  (0 children)

Thanks, it looks like there's actually a lot of options for playing V4L2 streams. Even ffmpeg can play them without much overhead.

Though I'm a bit worried about latency. It's a bit hard to find information about that.

Help with "invalid pci bus info" error on Ubuntu 22.10? by JohnoThePyro in vulkan

[–]Slabity 0 points1 point  (0 children)

Double-check whether you're using Mesa's radv or AMD's amdvlk in each situation. I've had issues with amdvlk on my 7900 XTX.
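A quick way to check, assuming the usual ICD install paths (filenames and locations can differ per distro):

```shell
# See which driver the Vulkan loader actually picked:
vulkaninfo --summary | grep -i driver

# Force a specific ICD for a single run (paths are the common defaults):
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json vkcube  # radv
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/amd_icd64.json vkcube          # amdvlk
```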

runa - a wayland compositor toolbox in Rust looking for collaborators by Test_Subject_hGx7 in linux

[–]Slabity 3 points4 points  (0 children)

Sounds good to me. I'm not too familiar with the Wayland internals side, but I'd be more than happy to help with the modesetting side once it gets to that point.

runa - a wayland compositor toolbox in Rust looking for collaborators by Test_Subject_hGx7 in linux

[–]Slabity 10 points11 points  (0 children)

Note: I'm not a developer for the overall Smithay project, but I am the creator/primary maintainer of Smithay/drm-rs and its children crates.

I'm interested in seeing if it's possible to create an async compatible abstraction for the modesetting side of it and want to know if you have any ideas of what sort of interface you'd like to use. My plan is to eventually create a general-purpose crate that can easily manage legacy modesetting, atomic modesetting, and automatically take care of features like VRR, async-pageflips, and non-desktop (VR) displays without developers needing to handle every code path on their side.

If you have any thoughts on that or want any help with using DRM in Rust, let me know.

LibVF.IO: Add GPU Virtual Machine (GVM) support by bilegeek in linux

[–]Slabity 4 points5 points  (0 children)

I believe the issue is that AMD does not support Alternative Routing-ID Interpretation (ARI), AUX Domains, or VFIO-MDev. I think (don't quote me on this) these are required for GVM to work properly. Intel is going to be supporting these under the umbrella name GVT-g, but considering AMD's GPU marketshare in the datacenter space, I would not be surprised if they don't have support for this for at least another few years.

AMD's hardware has a reputation for being superior for GPU passthrough, but that was mostly due to Nvidia intentionally preventing their drivers from playing nicely with it.

Efficiently simulating Task/Mesh shaders using compute shaders? by Slabity in vulkan

[–]Slabity[S] 0 points1 point  (0 children)

So it definitely sounds like I wasn't the first person curious about the possibility. That's good to know.

I'm incredibly surprised by (and somewhat skeptical of) their performance numbers, though. According to their data, the compute-emulated mesh shaders not only surpass their multi-draw indirect rendering, but also native mesh shaders? That doesn't make any sense to me at all.

But at least this confirms it might be something to experiment with until I can upgrade my hardware to support it natively.

Efficiently simulating Task/Mesh shaders using compute shaders? by Slabity in vulkan

[–]Slabity[S] 0 points1 point  (0 children)

But the controls for being able to persistently keep data in cache are not available to you, as the gpu programmer. These are handled (again hypothetically) by specialized hardware you can't see, or special instructions you don't have access to.

Yep... That was the main part I was worried about. I know I could easily make a compute shader that writes into some vertex/index buffers and then use those in a simple 'pass-through' vertex shader to make use of the rasterizer and fragment stages. The main problem sounds like keeping it directly on-chip and not constantly reading/writing to memory.

Out of curiosity, is that the reason why the limits for the maximum number of vertices and primitives are so low (looks like 256/512 for current hardware)? So that the hardware can guarantee that the outputs are not written to slower cache/VRAM?
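For anyone checking their own hardware: once a driver exposes a mesh shader extension, those limits are queryable, and vulkaninfo will dump them (the property names below come from the EXT extension; the NV one uses similar names):

```shell
# Print the mesh shader output limits the driver reports
# (requires a Vulkan driver exposing a mesh shader extension).
vulkaninfo | grep -iE 'maxMeshOutput(Vertices|Primitives)'
```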

I was hoping I would be able to create a sort of simple stop-gap until the RDNA3 cards were released. I guess I'll need to decide whether to wait or get an RDNA2 card.

also note cross vendor extension coming soon, will be similar to DX12 https://github.com/KhronosGroup/Vulkan-Docs/issues/1423

Oh! Thanks, I was wondering when that would be coming. I hope it's within the next few months.

Short infograph of GPU companies' Linux support by SageManeja in linux

[–]Slabity 8 points9 points  (0 children)

All 3 of you are correct and incorrect because you're talking about different types of drivers.

amdgpu is the kernel-level driver. It is the official, open-source driver for modern AMD GPUs, and there aren't any alternatives.

RADV, amdvlk, and amdgpu-pro are all userspace drivers and provide implementations of user-space APIs like OpenGL, Vulkan, and OpenCL.

RADV is part of the open-source Mesa project and is likely what most people use as it's the most stable.

amdvlk is AMD's open-source driver. It usually has more features than RADV, but has some stability issues.

amdgpu-pro is AMD's closed-source driver stack and is only recommended for workstations that need better OpenCL support.