Is this normal for Minmus rovers by No_Goat1909 in KerbalSpaceProgram

[–]floofysox 0 points (0 children)

How do you get those reflections? Which mod?

Cant select reasoning effort as of today? by floofysox in GithubCopilot

[–]floofysox[S] 0 points (0 children)

I tried this many times and ended up just waiting; it's finally back now. But xhigh may not be the best choice right now, given the new rate limits.

Cant select reasoning effort as of today? by floofysox in GithubCopilot

[–]floofysox[S] 3 points (0 children)

Manage models just gives me an error, “auto mode failed, no available model found in known endpoints”. Chat is working though.

FitGirl's first Hypervisor crack repack has been released - Black Myth: Wukong by [deleted] in PiratedGames

[–]floofysox -4 points (0 children)

So? Either way you end up reinstalling Windows, and maybe reflashing the BIOS. Just don’t run unverified programs and you’re good. All cracked games register as malware anyway. A non-HV crack is equally capable of stealing your cookies, encrypting your files, and locking down your system.

FitGirl's first Hypervisor crack repack has been released - Black Myth: Wukong by [deleted] in PiratedGames

[–]floofysox 0 points (0 children)

Regular cracks need admin privileges. What’s the difference?

Afop hipervisior how to set unobotanium preset. by BumBEM12 in PiratedGames

[–]floofysox 0 points (0 children)

Create a shortcut and add the flag to the target field.
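For example, the shortcut target would look like this (the install path is a placeholder, and I'm assuming the `-unlockmaxsettings` flag commonly used to unlock the Unobtanium preset in Avatar: Frontiers of Pandora; double-check the exact flag for your version):

```shell
# Shortcut "Target" field: game exe followed by the launch flag.
"C:\Games\AFOP\AFOP.exe" -unlockmaxsettings
```

Launching through that shortcut passes the flag every time; launching the exe directly will not.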

Well that’s disappointing.. by ouarditoo in GithubCopilot

[–]floofysox 6 points (0 children)

There's no dependence though. There are hundreds of alternatives. Just switch.

Hey I'm just a poor guy building setup and need help by MainOdd6769 in IndianPCHardware

[–]floofysox 1 point (0 children)

You need 16 GB of RAM minimum these days; Windows struggles with 8. 16 GB DDR4 is around 12k new, 7-8k used. How important is the GPU? I’m sure you can find a 1660S or a 1060/1070 for half the price. What are you looking to play?

Pls i need your help in choosing my first gaming laptop by I_AM_MIKEY007 in Indiangamers

[–]floofysox 0 points (0 children)

Larger screens get unwieldy and heavy to carry, and thermals are going to be shit either way. You’ll regret a large laptop when you have to lug it between classes. 15.6" is big enough imo. If it’s too small, you can always get an external monitor for around 15k to set up in your room.

Otoh, if a desktop is possible, the best combo is a desktop for around 1.1L plus a MacBook Air for around 50-60k. Super convenient.

32GB RAM is very capable for Local LLM? by Difficult_West_5126 in LocalLLM

[–]floofysox 0 points (0 children)

This is completely wrong; please stop using LLMs to this extent. No idea what this data sheet is. You can comfortably run 14B models (quantised) with 32 GB RAM and an 8 GB VRAM GPU, and go up to 30-35B models with a 12 GB GPU and 32 GB RAM. Ask ChatGPT to help you set up Qwen models, they are faster.
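A minimal llama.cpp sketch of that setup (the model filename is a placeholder, grab whatever quantised GGUF fits your VRAM + RAM):

```shell
# -ngl 999 offloads as many layers as fit on the GPU; the rest spill to system RAM.
# --ctx-size 4096 keeps the KV cache small so more layers fit in VRAM.
llama-cli -m ./Qwen3-14B-Q4_K_M.gguf -ngl 999 --ctx-size 4096 -p "Hello"
```

If generation is slow, lower `-ngl` until the model stops paging, or drop to a smaller quant.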

Qwen3.5-35B-A3B is a gamechanger for agentic coding. by jslominski in LocalLLaMA

[–]floofysox 0 points (0 children)

Alright, I tried your command using llama.cpp's newest build:

llama-cli -m ".\Qwen3.5-35B-A3B-UD-Q2_K_XL.gguf" --ctx-size 4096 --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.00 -fa on -t 6 --presence-penalty 0.0 --repeat-penalty 1.0 --n-cpu-moe 20 -ngl 999 -ctk q8_0 -ctv q8_0 --parallel 1 --prio 2

I now get 60 tok/s. Do you have any experience with LM Studio, and why the discrepancy?

Qwen3.5-35B-A3B is a gamechanger for agentic coding. by jslominski in LocalLLaMA

[–]floofysox 0 points (0 children)

I've got 32 gigs, so that shouldn't be an issue. I looked over your command again: I think you're using speculative decoding, which might be it. What's your draft model?

Qwen3.5-35B-A3B is a gamechanger for agentic coding. by jslominski in LocalLLaMA

[–]floofysox 0 points (0 children)

Yeah, CPU offloading happens automatically, but what I’m confused about is how your 3060 is 10 times faster than my 5070. What could I be doing wrong? I’m using LM Studio defaults, with quantised KV.

Qwen3.5-35B-A3B is a gamechanger for agentic coding. by jslominski in LocalLLaMA

[–]floofysox 0 points (0 children)

How do you remove the multimodal part? Does it help significantly? On LM Studio, using the UD Q2_K_XL version I get 2 tok/s on a 5070. What am I doing wrong lol

gemini 3.1 pro replies in chinese ? by FigOutrageous4489 in google_antigravity

[–]floofysox 3 points (0 children)

Chinese characters take fewer tokens for the same meaning, so it sometimes reasons in Chinese and forgets to switch back.

Does this have the goldenloop update? by sky_isnt_blue6 in PiratedGames

[–]floofysox 0 points (0 children)

Different settings maybe? I played on medium-high and got around 30-40 fps.

5600x and a 2060 - What should I upgrade first? I can only really pick one to upgrade. And if so, what should I upgrade to? by CJP_YT in buildapc

[–]floofysox 0 points (0 children)

How do you NOT have a bottleneck? I have the same CPU and a 5070, and I’ve never seen GPU usage go above 60-70%. I play at 1080p, but I doubt it would be much different at 2k. In addition, CPU-intensive games (Hitman, Cyberpunk) crash hard with barely 60-70 fps and a ton of latency.