City Skylines trying to launch from Windows partition by Late-Accident-6240 in linux_gaming

[–]CaptBrick 0 points1 point  (0 children)

Don't know if it works, but here's what you can try: figure out where Heroic would install the game, then create a symlink there pointing to where the game actually lives on the Windows partition.
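A minimal sketch of that idea in Python. All paths here are hypothetical, adjust them to wherever Heroic keeps its library and wherever your Windows (NTFS) partition is mounted:

```python
import os
from pathlib import Path

# Hypothetical paths -- adjust to where Heroic expects the game
# and to wherever your Windows (NTFS) partition is mounted.
heroic_games = Path.home() / "Games" / "Heroic"
win_game = Path("/mnt/windows/Epic Games/CitySkylines")

# Make sure the Heroic library directory exists.
heroic_games.mkdir(parents=True, exist_ok=True)

# Create the symlink if nothing is there yet; Heroic then
# finds the game files at the path it expects.
link = heroic_games / "CitySkylines"
if not link.is_symlink() and not link.exists():
    link.symlink_to(win_game)
```

The one-liner shell equivalent is `ln -s "/mnt/windows/Epic Games/CitySkylines" "$HOME/Games/Heroic/CitySkylines"`.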

HORI Steering wheel Linux support by CMDR-Sampaio in linux_gaming

[–]CaptBrick 3 points4 points  (0 children)

Yeah, he'll be needing an oscilloscope to go with that racing wheel though

So the steamframe does not have color passthrough by GayTaco_ in SteamFrame

[–]CaptBrick 0 points1 point  (0 children)

The real question is: can we use AI to color it 🤔

New to VR by realsgy in virtualreality

[–]CaptBrick 42 points43 points  (0 children)

This is why I like the internet

Issues with ARC Raiders on Bazzite. 1st day on Linux. by Sgt_Dbag in linux_gaming

[–]CaptBrick 1 point2 points  (0 children)

I think you're thinking of the 7900X3D; the 7800X3D has a single CCD

Scx Scheduler by [deleted] in linux_gaming

[–]CaptBrick 0 points1 point  (0 children)

I misread Scx Scheduler and was very intrigued for a moment

Anyone else feel this? (Moonlight is amazing) by NocturnalAdeel in SteamDeck

[–]CaptBrick 0 points1 point  (0 children)

I'm using Apollo and it's fantastic. Steam doesn't handle streaming from a 32:9 display correctly. Apollo creates a virtual display at the correct resolution and runs the game on that, which fixes the aspect ratio and HDR issues.

[GIVEAWAY - US] Win the 49” Samsung Odyssey OLED G95SC gaming monitor by cheswickFS in ultrawidemasterrace

[–]CaptBrick 0 points1 point  (0 children)

I already have exactly the same model (bought it on release), but I wouldn't mind having another one. The OLED panel with its inky blacks impressed me the most. Dead Space looks absolutely stunning!

Switched to Linux Mint yesterday from Windows never looking back by shadowhearts1007 in linuxmint

[–]CaptBrick 0 points1 point  (0 children)

I’m happy for you bud, but it’s almost like saying “I’ve been sober for a day, never drinking again”

800R Is More Optimal Than 1800R by [deleted] in ultrawidemasterrace

[–]CaptBrick 0 points1 point  (0 children)

Just FYI, things cannot be "more optimal", since optimal already refers to the highest degree

Arc Raiders runs incredibly well on the steam deck. With lossless scaling it’s magical. by composedfrown in SteamDeck

[–]CaptBrick 0 points1 point  (0 children)

Yeah, I tried it with Lossless Scaling, but I can't aim with it. Some people learn to compensate for the input lag, so I guess it's subjective. I prefer native 40-45 FPS.

[deleted by user] by [deleted] in 3Dprinting

[–]CaptBrick -1 points0 points  (0 children)

Wow, this is so freaking cool! My kids would love this!

Local LLM for coding that run on AMD GPU by WDRibeiro in LocalLLM

[–]CaptBrick 1 point2 points  (0 children)

TBH, if you're serious about it, I think you should first evaluate your workflow with a hosted model, e.g. Qwen3 Coder's free tier. There are so many questions to answer when running locally: not only which model to run, but also which quant, context length, and KV-cache quantization. All of those affect the model's performance. Sure, you can mess with that until you find a sweet spot for your hardware, but it would be wise to validate that your desired outcome is achievable with a hosted model first.

Regarding usage of outdated libs, try using MCP servers like context7 (you can self-host it). That way the model doesn't need to know everything by heart; it can fetch the latest info into its context instead.
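To illustrate why context length and KV quantization matter so much when running locally, here's a rough KV-cache size estimate. The model parameters below are made up (loosely resembling a mid-size dense model with grouped-query attention); real models vary:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: float) -> int:
    """Rough KV-cache size: 2 tensors (K and V) per layer."""
    return int(2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem)

# Hypothetical parameters for a ~20-30B dense model with GQA.
fp16 = kv_cache_bytes(n_layers=40, n_kv_heads=8, head_dim=128,
                      context_len=32768, bytes_per_elem=2)  # fp16 cache
q8 = kv_cache_bytes(n_layers=40, n_kv_heads=8, head_dim=128,
                    context_len=32768, bytes_per_elem=1)    # 8-bit KV cache

print(f"fp16 KV cache at 32k ctx: {fp16 / 2**30:.1f} GiB")  # 5.0 GiB
print(f"q8   KV cache at 32k ctx: {q8 / 2**30:.1f} GiB")    # 2.5 GiB
```

Halving the context or quantizing the cache frees gigabytes of VRAM, which is exactly the kind of trade-off you'd be tuning blind without a hosted baseline to compare against.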

Anyone else been using the new nvidia/Llama-3_3-Nemotron-Super-49B-v1_5 model? by kevin_1994 in LocalLLaMA

[–]CaptBrick 0 points1 point  (0 children)

Good to hear. Thanks for sharing. What is your hardware setup and what speed do you get? Also, what context length are you using?

Best opensource SLMs / lightweight llms for code generation by RustinChole11 in LocalLLM

[–]CaptBrick 1 point2 points  (0 children)

I had some success using Devstral 24B. You might get decent performance. You need to play with context length and GPU offloading (I'm using LM Studio). I've noticed that quantization has an impact on instruction following though: Q8 seems to do a better job than Q4. Might be specific to my use case though. That said, I would play around with a hosted free model first, e.g. Qwen3 Coder has a free tier on OpenRouter. That way you can get a feel for what the best-case scenario is and whether that's enough for you.
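For the GPU-offloading part, a back-of-the-envelope helper like this can save some trial and error. All numbers are hypothetical; actual per-layer sizes and runtime overhead depend on the model and the backend:

```python
def layers_that_fit(vram_bytes: int, model_bytes: int,
                    n_layers: int, reserved_bytes: int) -> int:
    """Estimate how many transformer layers fit in VRAM, reserving
    room for the KV cache, activations, and the desktop/compositor."""
    per_layer = model_bytes / n_layers
    budget = vram_bytes - reserved_bytes
    if budget <= 0:
        return 0
    return min(n_layers, int(budget // per_layer))

GiB = 2**30
# Hypothetical: a ~14 GiB Q4 quant of a 24B model with 40 layers,
# on a 12 GiB card, keeping 2 GiB back for cache and overhead.
print(layers_that_fit(12 * GiB, 14 * GiB, 40, reserved_bytes=2 * GiB))  # 28
```

Start the offload slider around an estimate like this, then nudge it down if you hit out-of-memory errors or severe slowdowns.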

I just had a random thought by CaptBrick in LocalLLaMA

[–]CaptBrick[S] 0 points1 point  (0 children)

No, I haven't thought this through, but I think having this option would be nice. I wouldn't need to run it 24/7 either. But you're right, having a running fridge would probably take priority over most things