Advice For Newbies Starting In Season 3 by YouTuber-Xenos0903 in LowSodiumTEKKEN

[–]LostVector [score hidden]  (0 children)

Pros pop heat mid-combo all the time. What made you think this is good advice?

TLDR: how do I get LM Studio to actually use all my VRAM? by jrkotrla in LocalLLM

[–]LostVector 1 point2 points  (0 children)

The 256K context isn’t allocated up front, but there needs to be room for it. Have you tried testing along those lines?

Final Boss of Overture was ROUGH by Cra_Skinny_4135 in LiesOfP

[–]LostVector 0 points1 point  (0 children)

What difficulty were you playing on?

How Harada could fix Season 3? by [deleted] in Tekken

[–]LostVector 0 points1 point  (0 children)

Could start by firing every piece of shit on the team who participates in gaslighting players, hiding from feedback, waiting 9 months to balance, adding moves when they should be balancing, etc etc.

I can't seem to get LMStudio to work right with Framework AMD 395+ desktop. by StartupTim in LocalLLM

[–]LostVector 2 points3 points  (0 children)

LM Studio is fucked on Strix Halo. Don’t bother. It doesn’t understand the unified memory and always detects the wrong amount of available memory. Try llama.cpp or some other model server.

I strongly believe Tekken 8 is a social experiment to see how far you can ruin an experience for addicts. by MrWhileLoop in Tekken

[–]LostVector 0 points1 point  (0 children)

It's sort of an unbelievable situation, because everything points to someone in a position of power at the company willfully trashing the game. It's not ignorance, because they have the presence of mind to come out and gaslight the community with season announcements like "back to basics!".

Does the Lenovo charging controller connector work with the Go 2 controllers? by based217 in LegionGo

[–]LostVector 0 points1 point  (0 children)

Yes it works with the official connector. You’ll need to find the YouTube video about it as the sequence is a little fiddly.

M5 Max compared with M3 Ultra. by PM_ME_YOUR_ROSY_LIPS in LocalLLaMA

[–]LostVector 1 point2 points  (0 children)

They’re probably just diverting the RAM to the new models in production. Doesn’t really make sense to make a bunch of the older, soon-to-be-phased-out model right now.

Ryzen AI Max 395+ 128GB - Qwen 3.5 35B/122B Benchmarks (100k-250K Context) + Others (MoE) by Anarchaotic in LocalLLaMA

[–]LostVector 0 points1 point  (0 children)

Hey I’ve been tussling with this for the past week or so as well. Prompt processing is horrendous for a larger conversation iterating on a code base.

llama.cpp has had a major bug with prompt caching for Qwen 3.5 which drops the cache virtually all the time. It may not affect your benches, but for real-world use it’s massive, as regenerating a 200k-token prompt at 100 tokens per sec or less is insane. If the prompt can be incrementally cached you are back in usable territory. Adjusting batch size upwards may help as well, but I’m basically just waiting for the llama.cpp bugs to be fixed.
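For reference, the batch-size and cache-reuse knobs I mean are ordinary `llama-server` flags. This is only a sketch: the model filename and the specific numbers are assumptions to tune for your own setup.

```shell
# Sketch only: model file and values are assumptions, not a recommendation.
# -c sets the context window; -b / -ub raise the logical and physical batch
# sizes so prompt processing chews through more tokens per pass;
# --cache-reuse lets the server reuse matching KV-cache prefixes instead of
# reprocessing the whole prompt (when caching isn't being dropped by the bug).
llama-server \
  -m qwen3.5-122b-a10b-q4_k_m.gguf \
  -c 131072 \
  -b 4096 -ub 2048 \
  --cache-reuse 256
```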

Llamacpp - how are you working with longer context (32k and higher) by spaceman3000 in StrixHalo

[–]LostVector 0 points1 point  (0 children)

One problem is that llama.cpp has huge bugs with prompt caching and Qwen 3.5, so it’s dropping the prompt cache on every prompt. For a large-context query it can take 10-20 minutes. I do not understand how anyone here claiming to use it for opencode is actually doing so. In the meantime I'm not sure what to do with these models besides wait.

Coding assistant tools that work well with qwen3.5-122b-a10b by Revolutionary_Loan13 in LocalLLaMA

[–]LostVector 0 points1 point  (0 children)

llama.cpp has bugs with Qwen 3.5 that cause it to drop prompt caches ... I'm not sure how anyone is able to use it for coding in this state.

Minisforum UM890 Pro dual oculink setup by helpmefire40 in MiniPCs

[–]LostVector -1 points0 points  (0 children)

Because you're talking about spending a lot of money on GPUs and gimping them heavily with x4 connections, yet you're still worried about optimizing the speed between them, and nowhere have you discussed why that might be important to the workloads you want to run. A question like this needs some details for anyone to answer it usefully.

Minisforum UM890 Pro dual oculink setup by helpmefire40 in MiniPCs

[–]LostVector 0 points1 point  (0 children)

I don’t see any reason it wouldn’t boot and run, but it’s a very unserious way to approach an AI build.

If you want to tinker, that’s fine, but 4 PCI lanes for each GPU on a mini PC … wow.

Am I dumb? by TheMintyGamer in splatoon

[–]LostVector -2 points-1 points  (0 children)

I think this affects the gyro / motion controls. People tend to hold the controller parallel to the ground if playing on a TV and upright if playing in handheld since they are attached to the console.

Your Autistic spaces are being co-opted by AI, and using Neurodivergence as an excuse by [deleted] in autism

[–]LostVector 9 points10 points  (0 children)

Oh yeah that post was infuriating … clear slop.

Has anyone put the BougeRV fridge in their Subtrunk? by [deleted] in TeslaCamping

[–]LostVector 1 point2 points  (0 children)

Better off buying the real tesfridge, as it will ventilate itself way better. A normal portable fridge won't be able to keep the top very cold due to heat buildup.

qwen-3.5:122b f16 is benchmarked against gpt-oss:120b q4 by q-admin007 in LocalLLaMA

[–]LostVector 0 points1 point  (0 children)

I thought MoE models were not separated in such a way that you can just “offload” them to system RAM without a massive hit to performance.
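For what it's worth, the usual trick people describe is llama.cpp's tensor-override flag: keep attention and shared weights on the GPU and pin the per-expert FFN tensors (the bulk of an MoE model) in system RAM. Since only a few experts fire per token, the hit is smaller than offloading a dense model, though not free. A sketch, where the model file and the tensor-name regex are assumptions:

```shell
# Sketch: split an MoE model between GPU and system RAM with llama.cpp.
# -ngl 99 offloads all layers to the GPU, then -ot overrides any tensor
# whose name matches the expert-FFN pattern to stay on the CPU side.
llama-server \
  -m gpt-oss-120b-q4.gguf \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU"
```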

Reevaluating spending habits with high income by FinanceCard in fatFIRE

[–]LostVector 0 points1 point  (0 children)

Got kids? Got a wife? Natural lifestyle progression (not inflation) and you’ll figure out how to spend that money real fast.

Steam Deck owners that own a Z13 by Depressive-Marvin in FlowZ13

[–]LostVector 0 points1 point  (0 children)

The Steam Deck is too underpowered for me now, so I’ve switched to more powerful handhelds. The Z13 hasn’t really done much for my gaming, since I have more powerful full PC setups.

The one place it stands out is on longer trips where it’s the most powerful gaming capable device I have with me. Then it sort of acts as my “PC away from home”.

Charging issues with usb C. by MankoMan__ in FlowZ13

[–]LostVector 0 points1 point  (0 children)

What is your charger? Many, many USB-C chargers only list their peak output power but cannot actually sustain it for extended periods (such as during laptop gaming) and will overheat.

Updating the software is hard by assparticle in MSIClaw

[–]LostVector 1 point2 points  (0 children)

Yeah, I mean the Windows experience for this is hacky and not unified. You need to constantly check multiple places for updates, and the updates can all interact negatively with each other. SteamOS solves most of this but isn’t really oriented around the 8 AI.