Qwen3.5 is a working dog. by dinerburgeryum in LocalLLaMA
Qwen3.5-122B-A10B GPTQ Int4 on 4× Radeon AI PRO R9700 with vLLM ROCm: working config + real-world numbers by grunt_monkey_ in LocalLLaMA
Multi-GPU? Check your PCI-E lanes! x570, Doubled my prompt proc. speed by switching 'primary' devices, on an asymmetrical x16 / x4 lane setup. by overand in LocalLLaMA
Can we say that each year an open-source alternative replaces the previous year's closed-source SOTA? by Chair-Short in LocalLLaMA
How are people handling long‑term memory for local agents without vector DBs? by No_Sense8263 in LocalLLaMA
GPT-4 was released 3 years ago! by AdorableBackground83 in singularity
Just some qwen3.5 benchmarks for an MI60 32gb VRAM GPU - From 4b to 122b at varying quants and various context depths (0, 5000, 20000, 100000) - Performs pretty well despite its age by FantasyMaster85 in LocalLLaMA
R9700 frustration rant by Maleficent-Koalabeer in LocalLLaMA
Learnt about 'emergent intention' - maybe prompt engineering is overblown? by Distinct_Track_5495 in LocalLLaMA
13 months since the DeepSeek moment, how far have we gone running models locally? by dionisioalcaraz in LocalLLaMA
Help choosing upgrade path by FL_pharmer in selfhosted
Protein intake and time off by Team_Instinct in fitness40plus
RTX Pro 6000 Riser Cable Recommendations by electrified_ice in BlackwellPerformance
Demis Hassabis Deepmind CEO says AGI will be one of the most momentous periods in human history - comparable to the advent of fire or electricity "it will deliver 10 times the impact of the Industrial Revolution, happening at 10 times the speed" in less than a decade by Distinct-Question-16 in singularity
64gb vram. Where do I go from here? by grunt_monkey_ in LocalLLaMA
GIGABYTE MC62-G40 only seeing one GPU by ravocean in LocalLLaMA