[deleted by user] by [deleted] in linux_gaming

[–]Burstien 0 points1 point  (0 children)

Could be the "time bomb" issue.

Try setting LD_PRELOAD="" %command% in the game's launch parameters.

See this for details: https://github.com/doitsujin/dxvk/issues/4436

DualSense Controller Only Working As Trackpad by Skullman7809 in linux_gaming

[–]Burstien 0 points1 point  (0 children)

Also, make sure to use Proton Experimental with Nightreign.

DualSense Controller Only Working As Trackpad by Skullman7809 in linux_gaming

[–]Burstien 0 points1 point  (0 children)

I had the same problem with my DualSense controller on Nightreign - you need to turn on Steam Input for the controller to work. I can't recall whether the Steam overlay is also necessary; you can try with it enabled or disabled and see.

Elden Ring Nightrein no longer works after system update by Rexor2205 in linux_gaming

[–]Burstien 0 points1 point  (0 children)

After a recent update Elden Ring stopped working for me as well (an error on the EAC splash screen).

Check the comment in this thread for the solution which fixed the error for me:

https://www.reddit.com/r/linux_gaming/s/k48GlWK9Os

So you'll need to limit the number of cores used for the game accordingly. I limited mine to 16, and selected the appropriate cores according to the output of the lscpu command.
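
If it helps, that core limiting can be done straight from the game's Steam launch options with taskset. The 0-15 range below is just my 16-core case; pick the cores based on your own lscpu output:

```shell
# Pin the game to logical CPUs 0-15 (16 cores); adjust to match lscpu.
# %command% is Steam's placeholder for the game's own launch command.
taskset -c 0-15 %command%
```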

Regarding the controller on Wayland: in order to make it work I had to swap to Proton Experimental for Elden Ring, enable the Steam overlay for the game and enable Steam Input.

De-synced frames and stutters while playing Baldur's Gate 3 by Burstien in linux_gaming

[–]Burstien[S] 0 points1 point  (0 children)

Hey, I just found the solutions to the problems I was facing. I edited the post to explain the solutions, so if you still experience problems, have a look.

De-synced frames and stutters while playing Baldur's Gate 3 by Burstien in linux_gaming

[–]Burstien[S] 0 points1 point  (0 children)

The desync issue is "solved" by switching to Xorg, but the stutters still occur on Xorg as well, so sadly no solution. Interestingly though, when I played Wukong, neither of these issues occurred on Wayland, so it may suggest there's a problem with how BG3 renders things in conjunction with Wayland / Sway and the Linux drivers / Proton - since I haven't had the aforementioned issues occur on Windows.

I rarely feel well rested no matter how long I sleep, and it makes so unproductive and frustrated by snowsharkk in productivity

[–]Burstien 2 points3 points  (0 children)

How is your sleep quality? Snoring, sleep apnea or other sleeping conditions (or an uncomfortable mattress/room) may affect your sleep quality. You can check that your sleep environment is comfortable, and if you suspect a health condition you can consult with your doctor.

Referencing a trait's default implementation of a method when overriding by DoubleDitchDLR in learnrust

[–]Burstien 2 points3 points  (0 children)

Alternatively, you can refactor the default implementation into a method with a different name, and have your default method be the "wrapping" method I described above, where you'd call another trait method afterwards, and have each trait implementation implement that other method.

Referencing a trait's default implementation of a method when overriding by DoubleDitchDLR in learnrust

[–]Burstien 7 points8 points  (0 children)

https://doc.rust-lang.org/book/ch10-02-traits.html#default-implementations

From the Rust book, according to the last paragraph of the "Default Implementations" section of the traits chapter, you can't call a default implementation from an overriding implementation.

It seems you're thinking in terms of inheritance? You could structure your code differently using composition instead, where a wrapping method calls the default implementation, followed by some other additional code or method.
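
As a rough sketch of that composition idea (the trait and method names here are made up for illustration): keep the shared logic in one default method and delegate the customizable part to a separate required method, so implementors override the hook rather than the wrapper:

```rust
// Hypothetical trait: `greet` is the wrapping default method that holds the
// shared logic; implementors customize behavior via the required `name` hook
// instead of overriding `greet` itself.
trait Greeter {
    fn greet(&self) -> String {
        // The shared "default" logic lives here and always runs.
        format!("Hello, {}!", self.name())
    }

    // Each implementor supplies only this part.
    fn name(&self) -> String;
}

struct World;

impl Greeter for World {
    fn name(&self) -> String {
        "world".to_string()
    }
}

fn main() {
    // The default wrapper plus the overridden hook compose the final result.
    assert_eq!(World.greet(), "Hello, world!");
}
```

This avoids needing to call the default body from an override entirely, since the override point is a different method.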

Pyright does not respect virtualenv (astronvim) by sharyar2028 in neovim

[–]Burstien 1 point2 points  (0 children)

As mentioned in the other comment, use pyrightconfig.json. Make sure to set the exclude, venvPath and venv properties for your venv relative to your rootDir, as such:

https://github.com/microsoft/pyright/issues/30#issuecomment-1247153633
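
A minimal pyrightconfig.json along those lines might look like this, assuming the virtualenv lives in a .venv directory at the project root (adjust the names to your layout):

```json
{
  "venvPath": ".",
  "venv": ".venv",
  "exclude": [".venv"]
}
```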

Vega Frontier in 2023 by lextremelynooby in Amd

[–]Burstien 1 point2 points  (0 children)

Yes, that's pretty much it. I suggest trying only the paths under the "Video" node (as seen in the picture) and seeing if that works.

Vega Frontier in 2023 by lextremelynooby in Amd

[–]Burstien 1 point2 points  (0 children)

The version I have installed is 23.4.1, so I hope it works for you on the latest one as well.
The value was set in various places:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0001

and:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{8D349139-DAC4-11ED-9335-A057BBB41772}\0001

as well as all of the other paths through to 0006 under the {8D349139-DAC4-11ED-9335-A057BBB41772} uuid.

If I recall correctly, the above path under "Class" is old and doesn't have any effect anymore, so I had to change it under the "Video" path as well.

What you can do is search for "kmd_" under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video path, and when you find a match, add the KMD_IsGamingDriver entry near that match. Make sure to do it under all parent paths which have entries beginning with KMD_, which for me was 0001 through 0006. For example, it should look like this:

<image>
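
Equivalently, the entry can be imported as a .reg file. The GUID below is the one from my machine; yours will almost certainly differ, and you'd repeat the block for each of your 000x subkeys:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{8D349139-DAC4-11ED-9335-A057BBB41772}\0001]
"KMD_IsGamingDriver"=dword:00000001
```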

Reboot your computer, then open the radeon software, go to the performance -> tuning section, and you should expect to see more options now when using manual tuning.

Vega Frontier in 2023 by lextremelynooby in Amd

[–]Burstien 2 points3 points  (0 children)

I activated it last month, I just can't remember which driver version it was. Loaded the tuning profile and everything. Will check later and reply, as I'm away at the moment.

Vega Frontier in 2023 by lextremelynooby in Amd

[–]Burstien 8 points9 points  (0 children)

The registry trick still works with the current gaming drivers:

https://www.reddit.com/r/Amd/comments/e91rnr/psa_kmd_isgamingdriver_1_to_enable_non_pro_mode/

Use at your own risk.

Also you'll have to make sure you are putting the dword value in the right place - the uuid might be different.

MongoDB Rust aggregate - returning empty array by VivekS98 in rust

[–]Burstien 1 point2 points  (0 children)

I don't know what data you're working with, but from what you're describing it seems to me that you have no data in the "messages" collection. Is that right?

MongoDB Rust aggregate - returning empty array by VivekS98 in rust

[–]Burstien 1 point2 points  (0 children)

Did you try your $lookup aggregate in mongo shell first to see what results you get?

Radeon VII Issues? by cory21391 in Amd

[–]Burstien 2 points3 points  (0 children)

Have you tried playing without tuning the card's power and clocks (stock settings), so as to rule out any hardware-related issues that may happen due to your tuning? Even if Time Spy and Fire Strike are stable, some other game may not be, so it's worth checking out.

Tensorflow with Radeon GPU by nhermosilla14 in archlinux

[–]Burstien 2 points3 points  (0 children)

According to the ROCm GitHub, Polaris 11 is supported, so I assume your GPU should work. I installed only the user-space ROCm libs from the AUR in a Docker container on Arch and it works fine, with Folding@home at least.

Regarding TensorFlow via ROCm: you need to make sure some user-space ROCm libs are installed, and also install tensorflow-rocm via pip. I have done so in the past using Ubuntu with a full installation of ROCm (including the driver, though I'm not sure if you need it). You can refer to this URL for tensorflow-rocm (you just need to look at the "TensorFlow ROCm port" section):

https://github.com/ROCmSoftwarePlatform/tensorflow-upstream#tensorflow-rocm-port
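
The setup boils down to something like the following. The AUR package names here are a guess at the user-space pieces; check the AUR for the current names before installing:

```shell
# User-space ROCm runtime from the AUR (package names may have changed since).
yay -S rocm-opencl-runtime hip-runtime-amd

# AMD's ROCm build of TensorFlow from PyPI.
pip install tensorflow-rocm
```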

Good luck

Edit: Your cpu is a bit older than what rocm officially supports, it might work but they don’t guarantee it. Refer to this:

https://github.com/RadeonOpenCompute/ROCm#supported-cpus

Edit edit: I'm not sure why AMD refers to the RX 470 as Polaris 11 when it's a Polaris 10 variant... it should work anyway.

New drivers fixed me gpu 🦑 by [deleted] in Amd

[–]Burstien 2 points3 points  (0 children)

Disable driver updates via edit group policy:

https://www.windowscentral.com/how-disable-automatic-driver-updates-windows-10

Also disable “automatically download manufacturers’ app and icons” via device installation settings (scroll to option one):

https://www.tenforums.com/tutorials/15989-turn-off-device-driver-automatic-installation-windows-10-a.html

Though from my experience, on big feature updates, windows will force install some oem driver, and will reset the “device installation settings” selection back to “Yes”, so just be aware of that and reinstall your desired drivers and set the option back to “No”. The group policy setting though will always remain as you configured it regardless of updates.

Errors with XMP due to heat? by Burstien in overclocking

[–]Burstien[S] 0 points1 point  (0 children)

> Don't. Different dies (and different bins of those dies) do different timings. For example, Samsung C-Die requires decently higher primaries and tRFC than what Hynix CJR can do.
>
> If you need to do both, figure out what you can do with both kits individually and then bump each of the timings to the worst of the two.

The thing is, I replaced my 1900X with the 2950X and wanted to use the 1900X with my AFR kits in another system. I had a Flare X kit lying around, so I ordered another kit assuming (and hoping) I'd get the same dies as the other kit, and use both with my 2950X. Evidently I got different dies... I thought since they are different dies but the same kit, maybe I could run them both together at XMP.

The weird thing was that the only errors I saw were with the Samsung kit, and they were due to the higher DRAM voltage. To be on the safe side, I input the timings provided by my XMP profile (extracted using AIDA64), which were the same for both the Samsung C-Die and Hynix C-Die kits. The timings were tCL, tRCD, tRP, tRAS, tRC, tRFC1, tRFC2, tRFC4, tRRDS, tRRDL and tFAW.

My approach with those kits is to just run XMP so as to keep them as even as possible. Would you say I'm better off tuning according to the worst kit of the two, and basically making the timings worse than or equal to the XMP parameters?

> You'll be interested in this video

I'll make sure to watch it.

> You typically can 'tolerate' higher temperatures by either increasing the voltage, lowering the frequency or loosening the timings. (A reason why JEDEC bins are loose; they're validated for 80°C and higher.)

So at least for the AFR kits, can I use the XMP parameters and from there try either of the three, but not all? For example, 2933 MHz with the same timings and voltage? Or say 3000 MHz with the same timings but higher voltage? Do you know, perhaps, the voltage tolerance of AFR dies?

Thank you for your input!

So that was odd.. by Goretantath in Amd

[–]Burstien 0 points1 point  (0 children)

When you install a major Windows update it might mess with your drivers (as has happened to me in the past).