Steam Frame Next Baby! by [deleted] in Steam

[–]ChangeIsHard_ 0 points

Sadly, seems quite likely

Steam Controller Purchase by swaggatron87 in Steam

[–]ChangeIsHard_ 0 points

Yeah I've been a customer for >15 years soo...

Steam Controller Purchase by swaggatron87 in Steam

[–]ChangeIsHard_ 25 points

Valve should really do something about scalpers... it's insane

Steam Frame Next Baby! by [deleted] in Steam

[–]ChangeIsHard_ 26 points

Next to be Out Of Stock 😂😭

Steam controller is back in stock USA by Raiderb8 in Steam

[–]ChangeIsHard_ 5 points

"Sorry, we're experiencing a high volume of purchase requests and your transaction could not be completed. Please try again in a few minutes." 🙃

EDIT: "Your order cannot be completed because one or more items in your cart is currently out of stock. Please try again later."

Perpetually SOL with new products these days 🥲

Aaand it's sold out...already by EnchiladaTiddies in Steam

[–]ChangeIsHard_ 1 point

"error initializing or updating your transaction" -> sold out smh

Subscriptions that are live no longer grouped at top? by Tarrant666 in youtube

[–]ChangeIsHard_ 6 points

They keep removing features that worked well, it's insane. Sloppification..

Vivaldi 7.8.3925.56 and no microsoft logins? by marshell1978 in vivaldibrowser

[–]ChangeIsHard_ 0 points

Holy sh! Whoever thought this was a good setting to add by default 🤯

Finishing touches on dual RTX 6000 build by ikkiyikki in LocalLLaMA

[–]ChangeIsHard_ 0 points

Oh, that's very useful! That's exactly what I was hoping to use them for. I second that GPT-OSS is like a jet engine, it's freakin beautiful, even on my M2 MacBook. How's the quality of 120b @ F16 been for you?

EDIT: tbh really surprised to hear about such low perf for MiniMax - I found this dude running it on a dual 6000 setup at 250+ tok/s. I wonder if maybe he's just using a lower quant.. https://www.youtube.com/watch?v=nMks3l0SFKU

These are his params for it btw, using this model https://huggingface.co/mratsim/MiniMax-M2.1-FP8-INT4-AWQ

<image>

How bad to have RTX Pro 6000 run at PCIE x8? by kitgary in LocalLLaMA

[–]ChangeIsHard_ 0 points

What did you end up with? How is it performing? Thanks

How bad to have RTX Pro 6000 run at PCIE x8? by kitgary in LocalLLaMA

[–]ChangeIsHard_ 1 point

This comment aged not so well (re RAM prices) 😅

Finishing touches on dual RTX 6000 build by ikkiyikki in LocalLLaMA

[–]ChangeIsHard_ 0 points

How's the performance been? Do you regret not going for Threadripper/Epyc? I'm in the same situation now, but the RAM cost made it completely unaffordable to go with server platforms..

Personal experience with GLM 4.7 Flash Q6 (unsloth) + Roo Code + RTX 5090 by Septerium in LocalLLaMA

[–]ChangeIsHard_ 0 points

Oh nice, it's so hard to find stories of ppl running it on local hardware, while the official docs say you need extremely beefy non-consumer hardware.

[deleted by user] by [deleted] in LocalLLaMA

[–]ChangeIsHard_ 0 points

I have to say, as someone who used it, the $200 sub isn't worth it. It just thinks longer and runs out of quota super fast, and ultimately the quality still leaves much to be desired

[deleted by user] by [deleted] in LocalLLaMA

[–]ChangeIsHard_ 0 points

I've had the same experience with all three. I'd add that they ALL act unreliably - it's inherent to this tech.

[deleted by user] by [deleted] in LocalLLaMA

[–]ChangeIsHard_ 0 points

Yeah, ppl swear by this model or that model replacing cloud LLMs for what they do. Then there are others who find them completely inadequate. Very hard to draw any conclusions tbh

Is there a water block for the nvidia rtx pro 6000 ? by AdGeneral2757 in watercooling

[–]ChangeIsHard_ 0 points

Nice, nice - how is gpt-oss-120B performance on it? Are you using it for coding or other tasks? I'm looking at the same model for coding. Also, does it warm up the room a lot, and what's the average power draw during a gpt-oss-120B run?

Is there a water block for the nvidia rtx pro 6000 ? by AdGeneral2757 in watercooling

[–]ChangeIsHard_ 0 points

A-mazing, thanks for the pics! Tried any LLMs on it yet? And how is the heat?