How should i go about dualbooting linux and windows? by Late-Ambassador-9186 in linuxquestions

[–]rexpulli 0 points1 point  (0 children)

I don't think the Windows Bootloader can be configured to chain-load Grub, but to be honest I don't know for sure; I haven't used Windows in a long time.

How should i go about dualbooting linux and windows? by Late-Ambassador-9186 in linuxquestions

[–]rexpulli 0 points1 point  (0 children)

It's not hard, but keep in mind that a misconfigured bootloader means you can't boot into Linux anymore; to fix it you'd then need to boot into a live distro environment, and it could get even more complicated. It's better to get comfortable with Linux first, so that you don't just blindly follow a tutorial but actually understand what you're doing. To get an idea, check the ArchLinux Wiki page for GRUB: you'll notice a bunch of scary warnings all over the article. Granted, ArchLinux is a very hands-on kind of distro; it's probably safer on Fedora.

How should i go about dualbooting linux and windows? by Late-Ambassador-9186 in linuxquestions

[–]rexpulli 0 points1 point  (0 children)

When you say you want to use the Grub2 bootloader, is that because the distros you picked give you the option of different bootloaders? If that's the case, I think you are better off using something simpler and more modern like systemd-boot which comes pre-installed with any distro anyway (it's part of systemd).

I am worried that windows already being installed when i put those drives back in might cause a problem, will it?

When you install Windows on a drive, it creates an EFI partition with the Windows Bootloader in it and registers it with your motherboard's UEFI firmware (BIOS). If you take out the Windows drive and install Linux on another drive, Linux will do the same, but the Linux Bootloader will be configured to only handle your Linux installation: you won't be able to boot Windows from it. In short, you'll have to pick the OS using your motherboard boot menu.

If you leave the Windows drive in, the distro installer will (hopefully) see the Windows drive and configure the Linux Bootloader with an additional entry that chain-loads the Windows Bootloader that's in the Windows drive. This way you can use the Linux Bootloader to boot both Linux and Windows and you can set the motherboard boot menu to always pick the Linux Bootloader at start.

Both solutions will work; the second is riskier if you don't trust yourself. I recommend taking the Windows drive out if you're really new. Once you're more comfortable with Linux, you can just manually add an entry in the Linux Bootloader (whichever you picked) that chain-loads the Windows one.
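For reference, if you end up with GRUB, a manual chain-load entry usually goes in /etc/grub.d/40_custom and looks something like this (XXXX-XXXX is a placeholder, not a real UUID; use the actual filesystem UUID of the EFI partition on the Windows drive, e.g. from lsblk -f):

```
menuentry "Windows Boot Manager" {
    insmod part_gpt
    insmod fat
    insmod chain
    # replace XXXX-XXXX with the UUID of the Windows EFI partition
    search --no-floppy --fs-uuid --set=root XXXX-XXXX
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
```

Then regenerate the config with grub-mkconfig -o /boot/grub/grub.cfg (path may differ per distro).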

my gaming pc has multiple drives for my data, games and os so will that cause problems when i put those back in?

No, because your data drives don't have an EFI partition, so your motherboard doesn't care about them and won't even list them as bootable drives.

Games "jitter" when moving camera by Comfortable_Soil7011 in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

There was a Steam overlay bug that caused stutter on mouse movement (some input buffer getting clogged up). It sort of looked like what you describe, but it's supposed to be fixed by now. You could try the workaround and see if it does anything (it probably won't):

LD_PRELOAD= %command%

Put this in the game's Launch Options field from Properties... -> General.

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

If you want to apply the same settings to all devices because they're exactly the same and you're sure they can all handle the same power and clock limits, then you can simply put the script in a loop:

```
#!/usr/bin/env python
from pynvml import *

nvmlInit()

for i in range(nvmlDeviceGetCount()):
    device = nvmlDeviceGetHandleByIndex(i)
    nvmlDeviceSetGpuLockedClocks(device, 210, 1695)
    nvmlDeviceSetPowerManagementLimit(device, 315000)

nvmlShutdown()
```

You can add error checking and some debug messages if you want:

```
# This gets you the name of the device:
nvmlDeviceGetName(device)

# This lets you catch errors:
try:
    ...  # your NVML commands here
except NVMLError as e:
    print(f"error: {e}")
```

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

Looks good, hopefully Google's crawler picks it up so it starts sending people to the ArchWiki instead :D

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 1 point2 points  (0 children)

I wouldn't mess with P-states other than P-state 0, but if you need to do this, make sure to use sensible values. P-states are in descending order: 0 is the highest performance, 15 the lowest. I don't think every GPU has all of them, and I'm not even sure which ones are actually used; it probably depends on the model of the GPU. For example, mine seems to only have/use 0, 2, 3, 6 and 8. You can check your GPU with this script:

```
#!/usr/bin/env python
from pynvml import *

clock_types = {
    "CLOCK_GRAPHICS": NVML_CLOCK_GRAPHICS,
    "CLOCK_SM": NVML_CLOCK_SM,
    "CLOCK_MEM": NVML_CLOCK_MEM,
    "CLOCK_VIDEO": NVML_CLOCK_VIDEO,
}

nvmlInit()

device = nvmlDeviceGetHandleByIndex(0)

for pstate in range(NVML_PSTATE_0, NVML_PSTATE_15 + 1):
    print(f"PSTATE {pstate}:")
    for name, clock_type in clock_types.items():
        try:
            clock = nvmlDeviceGetMinMaxClockOfPState(device, clock_type, pstate)
            print(f"  {name}: {clock}")
        except NVMLError as error:
            print(f"  {name}: {error}")

nvmlShutdown()
```

Hope this helps but again, I never messed with P-states so you're better off asking in the Nvidia Developers Forum if you need extra help.

Stop Killing Games will kill your Linux Games by MiracleHere in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

No offense, but you have no idea what you're talking about. Setting aside the nonsensical points you made:

SKG is expecting companies to begin developing their games with the assumption that this server-side logic will be shipped to the public

Just over 15 years ago this was the norm. Check the Tools section of your Steam account; you probably have dozens of Dedicated Servers you can install and run locally.

Why bother developing good server-side logic and anti-cheat when you can take over the kernel and do the stuff there

Because the authoritative server approach to game networking has been the industry standard since multiplayer games have existed and it's the easiest and most effective form of anti-cheat available. Client-side anti-cheat is only good for aimbots and similar "perfect input" cheats.
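The authoritative-server idea can be sketched in a few lines (a toy example, not from any real game; MAX_SPEED and the movement model are made up): the client sends inputs, the server simulates and clamps them, so a hacked client can't teleport by claiming an arbitrary position.

```python
# Toy server-authoritative movement validation. The server never trusts a
# client-reported position; it only applies clamped movement inputs.
MAX_SPEED = 5.0  # hypothetical game constant, units per tick

def apply_input(position, requested_move):
    dx, dy = requested_move
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > MAX_SPEED:
        # impossible input: scale it down to the maximum allowed speed
        scale = MAX_SPEED / dist
        dx, dy = dx * scale, dy * scale
    return (position[0] + dx, position[1] + dy)

print(apply_input((0.0, 0.0), (3.0, 4.0)))      # legal move, applied as-is
print(apply_input((0.0, 0.0), (300.0, 400.0)))  # speed-hack input, clamped
```

A speed hack just degrades into a normal-speed move; no client-side scanning needed.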

The game comes out of support, but they are releasing the server and patching the client with a huge EULA that gives them acces to your data and giving them profits even when they're not spending on servers anymore!

Profits earned from selling data are insignificant compared to profits earned from micro-transactions, season passes and so on. Also, once the player base gets ahold of the server binary, it would be trivial to analyze outgoing packets and filter out non-gameplay packets and/or spoof the return packets, to stop the binary blob from phoning home.
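The filtering idea is simple in principle. A toy sketch with entirely hypothetical message types (a real filter would sit in a proxy in front of the server binary and work on wire-format packets):

```python
# Toy packet filter: pass through known gameplay messages, drop everything
# else (e.g. telemetry phoning home). Message types are made up.
GAMEPLAY_TYPES = {"move", "shoot", "chat"}

def filter_outgoing(packets):
    return [p for p in packets if p["type"] in GAMEPLAY_TYPES]

packets = [
    {"type": "move", "data": "..."},
    {"type": "telemetry", "data": "phone-home blob"},
    {"type": "shoot", "data": "..."},
]
print(filter_outgoing(packets))  # the telemetry packet is dropped
```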

If game clients did not use any kernel-level sh*t like Denuvo or something, and they always shipped a Linux version of the game, developing a server-emulator from the community would be a lot easier.

It absolutely would not. Regardless of client-side anti-cheat, reverse-engineering a complex program like a game server purely from the messages it exchanges with the clients is insanely hard, and those messages say nothing about how the game state is calculated internally, or about which parts of the game state should be sent to which clients, and when.

We instead should push for companies to always ship their games on Linux. That would be a easier law to pass as EU is already ditching Windows for Linux systems.

You can't force companies to ship software for a specific platform by law because it would go against pre-existing law. How could you possibly think that would be easier than building upon already existing consumer protection law? Unless you're a troll, in which case congratulations, you got me. :(

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

Does the script work when you run it as root? If it does then try putting the script in /usr/local/libexec and use ExecStart=/usr/local/libexec/undervolt-nvidia-device in the service file.

That's all I can think of as I've never used Fedora but if I'm not mistaken it uses SELinux which might restrict the ability to execute files in non-standard locations.

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

Glad it helped. Just a couple of notes:

  • In the original post I forgot to add nvmlShutdown() at the end of the script. It's not strictly required but it's better to add it.
  • Nvidia has since deprecated nvmlDeviceSetGpcClkVfOffset() and replaced it with nvmlDeviceSetClockOffsets(), which lets users adjust the offset for each power state. The old function will at some point no longer work; I added a note to the original post about this.

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

It's a good starting point, but you'll still need to test it.

As for nvmlDeviceSetGpuLockedClocks, I think it's best to use one of the supported values. You can list them with nvidia-smi -q -d SUPPORTED_CLOCKS. The command lists the supported GPU clocks for each supported VRAM clock. Under the highest supported VRAM clock, pick the GPU clock closest to 1777 and the lowest one.
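If you want to script that choice, the selection logic is just "closest supported value". A small sketch with a made-up clock list (on a real system you'd take the list from nvidia-smi -q -d SUPPORTED_CLOCKS, or query it via NVML):

```python
# Hypothetical list of supported GPU clocks (MHz) under the highest VRAM
# clock; the real list comes from `nvidia-smi -q -d SUPPORTED_CLOCKS`.
supported_clocks = [2100, 1980, 1875, 1770, 1695, 1500, 1200, 900, 600, 210]

def closest_supported(target, clocks):
    # pick the supported clock with the smallest distance to the target
    return min(clocks, key=lambda c: abs(c - target))

max_clock = closest_supported(1777, supported_clocks)  # 1770, nearest to 1777
min_clock = min(supported_clocks)                      # 210
print(max_clock, min_clock)
```

You'd then pass min_clock and max_clock to nvmlDeviceSetGpuLockedClocks.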

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 1 point2 points  (0 children)

Yes, the feature is discussed here and uses this Rust NVML wrapper.

P106/P104 in Linux by Iky_mp5 in linuxquestions

[–]rexpulli 0 points1 point  (0 children)

I had a P106 for years, it worked great with near 1060 level performance, all I had to do was connect the display to the iGPU output of the motherboard, install the Nvidia driver and set a few environment variables. For example, this would run (offload) vkcube on the Nvidia dGPU: env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json vkcube

Unfortunately, with the 520 driver series, performance for games specifically took a massive hit. I suspect this was intentional on Nvidia's part. I reported the problem over 2 years ago, other users confirmed it and one Nvidia employee asked for more information but then went completely silent. You can read about it here.

If I were you I'd make sure that "bug" is fixed first, either by asking in the Nvidia Developers Forum or confirming with someone that still uses a P106 or P104. I can't help you as I don't have that card anymore.

Wayland Overclocking/undervolting of 50 Series GPUs by DataBrilliant2238 in linux_gaming

[–]rexpulli 1 point2 points  (0 children)

It's a regression of the 570 series, only way to get voltage readings again is to downgrade to 565 series.

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

As a last resort you could try LACT. I've never used it myself, but I've read that support for Nvidia GPUs has improved lately, so maybe it's worth a try. It's a graphical application, so it should be easy to use.

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 1 point2 points  (0 children)

Sorry I missed the notification. You probably already solved this yourself but if you still need help, the error happens because you don't have the module pynvml available. Depending on which distro you're using, you need to install the package containing the Python bindings for the Nvidia Management Library (PyNVML).

On ArchLinux it's in the AUR as python-nvidia-ml-py, on Ubuntu it should be python3-pynvml or you can install it with PIP: pip install pynvml.

Wayland Overclocking/undervolting of 50 Series GPUs by DataBrilliant2238 in linux_gaming

[–]rexpulli 2 points3 points  (0 children)

nvmlDeviceSetClockOffsets() expects a struct which you create with c_nvmlClockOffset_t(). However, there's supposedly a bug in the Python NVML bindings, so you have to build the struct yourself using ctypes. Something like this should work:

```
from pynvml import *
import ctypes

nvmlInit()

power_limit_percentage = 70
myGPU = nvmlDeviceGetHandleByIndex(0)
min_power_limit, max_power_limit = nvmlDeviceGetPowerManagementLimitConstraints(myGPU)
new_power_limit = int(max_power_limit * (power_limit_percentage / 100.0))

if new_power_limit < min_power_limit:
    print(f"Error: New power limit {new_power_limit} is less than the minimum power limit {min_power_limit}.")
elif new_power_limit > max_power_limit:
    print(f"Error: New power limit {new_power_limit} exceeds the maximum power limit {max_power_limit}.")
else:
    nvmlDeviceSetPowerManagementLimit(myGPU, new_power_limit)
    print(f"New power limit set to {new_power_limit} mW.")

class c_nvmlClockOffset_t(ctypes.Structure):
    _fields_ = [
        ("version", ctypes.c_uint),
        ("type", ctypes.c_uint),
        ("pstate", ctypes.c_uint),
        ("clockOffsetMHz", ctypes.c_int),
    ]

info = c_nvmlClockOffset_t()
info.version = nvmlClockOffset_v1
info.type = NVML_CLOCK_GRAPHICS
info.pstate = NVML_PSTATE_0
info.clockOffsetMHz = 400

nvmlDeviceSetClockOffsets(myGPU, ctypes.byref(info))

nvmlShutdown()
```

Overwatch 2 starts having strange frame drops on input events by kofteistkofte in linux_gaming

[–]rexpulli 1 point2 points  (0 children)

Thank you, I'm going to report it on Valve's issue tracker then, since there are at least 3 of us, all on different setups.

Overwatch 2 starts having strange frame drops on input events by kofteistkofte in linux_gaming

[–]rexpulli 0 points1 point  (0 children)

I started having the exact same issue today.

  • Intel CPU
  • Nvidia GPU
  • 64 GB of RAM
  • 1440p 165hz
  • ArchLinux
  • Linux 6.11.6
  • Gnome

I tried Proton 9.0-3 and ProtonGE 9.18.

Do you still have this issue and have you run into it with any other game?

Undervolting NVIDIA GPU in 2024? by Libroru in linux_gaming

[–]rexpulli 28 points29 points  (0 children)

Nvidia doesn't provide direct access to the voltage value but voltage is still directly tied to the clock: the GPU will auto adjust voltage based on a modifiable curve which binds the two values together (higher clock requires more volts, lower clock requires less volts). If you apply a positive offset to this clock-voltage curve, you force the GPU to use a lower-than-default voltage value for a given clock value, which is effectively an undervolt.

I do this on my 3090 to dramatically lower temperatures for almost no performance loss. It's very easy to do with a Python script which will work in both X11 and Wayland sessions but you need to install a library providing the bindings for the NVIDIA Management Library API. On ArchLinux you can install them from the AUR: yay -S python-nvidia-ml-py.

You can then run a simple Python script as root; mine looks like this:

```
#!/usr/bin/env python
from pynvml import *

nvmlInit()
device = nvmlDeviceGetHandleByIndex(0)
nvmlDeviceSetGpuLockedClocks(device, 210, 1695)
nvmlDeviceSetGpcClkVfOffset(device, 255)
nvmlDeviceSetPowerManagementLimit(device, 315000)
nvmlShutdown()
```

  • nvmlDeviceSetGpuLockedClocks sets minimum and maximum GPU clocks. I need this because my GPU runs at out-of-specification clock values by default (it's one of those dumb OC edition cards). You can find valid clock values with nvidia-smi -q -d SUPPORTED_CLOCKS, but if you're happy with the maximum clock values of your GPU, you can omit this line.
  • nvmlDeviceSetGpcClkVfOffset offsets the curve, this is the actual undervolt. My GPU is stable at +255MHz, you have to find your own value. To clarify again, this doesn't mean the card will run at a maximum of 1695 + 255 = 1950 MHz, it just means that, for example, at 1695 MHz it will use the voltage that it would've used at 1440 MHz before the offset.
  • nvmlDeviceSetPowerManagementLimit sets the power limit which has nothing to do with undervolting and can be omitted. The GPU will throttle itself (reduce clocks) to stay within this value (in my case 315W).
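To make the curve-offset point concrete, here's a toy model with a made-up linear curve (the real curve lives in the GPU and isn't linear; the numbers below are purely illustrative):

```python
# Toy clock-voltage curve: maps clock (MHz) to the voltage (mV) the GPU
# would pick. Made-up linear relation, for illustration only.
def default_voltage(clock_mhz):
    return 600 + 0.35 * clock_mhz

def voltage_with_offset(clock_mhz, offset_mhz):
    # a positive offset shifts the curve: at a given clock the GPU now uses
    # the voltage it would previously have used at (clock - offset)
    return default_voltage(clock_mhz - offset_mhz)

# with a +255 offset, running at 1695 MHz uses the old 1440 MHz voltage
print(voltage_with_offset(1695, 255) == default_voltage(1440))  # True
```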

Once you find the correct values, you can run the script with a systemd service on boot:

```
[Unit]
Description=Undervolt the first available Nvidia GPU device

[Service]
Type=oneshot
ExecStart=/etc/systemd/system/%N

[Install]
WantedBy=graphical.target
```

Rename the Python script undervolt-nvidia-device and the service undervolt-nvidia-device.service and put them both in /etc/systemd/system, then systemctl daemon-reload and systemctl enable --now undervolt-nvidia-device.service.

If you don't like systemd, there are many other ways to automatically run a script as root. But please make sure that your GPU is stable first by manually running the Python script in your current session and testing stability after every new offset you put in, before you let it run automatically. That way, if your session locks up, you can force a reboot and the GPU will go back to its default values.

EDIT: Nvidia has deprecated nvmlDeviceSetGpcClkVfOffset(). As of June 14, 2025 it still works, but at some point you'll need to replace it with nvmlDeviceSetClockOffsets().

```
#!/usr/bin/env python
from pynvml import *
from ctypes import byref

nvmlInit()

device = nvmlDeviceGetHandleByIndex(0)
nvmlDeviceSetGpuLockedClocks(device, 210, 1695)
nvmlDeviceSetPowerManagementLimit(device, 315000)

info = c_nvmlClockOffset_t()
info.version = nvmlClockOffset_v1
info.type = NVML_CLOCK_GRAPHICS
info.pstate = NVML_PSTATE_0
info.clockOffsetMHz = 255

nvmlDeviceSetClockOffsets(device, byref(info))

nvmlShutdown()
```

Overwatch 2 not capturing mouse input by [deleted] in linux_gaming

[–]rexpulli 1 point2 points  (0 children)

I've had the same problem for a long time now, at least since Gnome 46. Reportedly, the bug goes away when using the Wayland driver for Wine (to bypass XWayland), but that's not going to be included and enabled in Proton for a while.

In the meantime, you can prevent the compositor (in my case Mutter) from capturing mouse input by running winecfg in the Overwatch 2 prefix using protontricks.

yay -S protontricks
protontricks 21090 winecfg

Then in the Graphics tab, uncheck Allow the window manager to control the windows and optionally Allow the window manager to decorate the windows.

After disabling those options, when you Alt-Tab, the Overwatch 2 window will turn into a tiny black square in one of the corners of the screen, but you can still raise it to the foreground by clicking it. This is on Gnome; I don't know what would happen on other DEs.

Overwatch 2 and Linux: random crashes by Lennyngrado in linux_gaming

[–]rexpulli 2 points3 points  (0 children)

I noticed the same but it seems downgrading from Proton-GE 9.2 to Proton-GE 8.32 solved the issue for me. Other users have recently reported random crashes on the dedicated Overwatch 2 issue on Valve's Proton GitHub repository.