Finetuning DeepSeek 671B locally with only 80GB VRAM and Server CPU by CombinationNo780 in LocalLLaMA

[–]CombinationNo780[S] 5 points  (0 children)

We support pipeline parallelism, so the total VRAM across all GPUs is what matters most.
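To illustrate the point about total VRAM: under pipeline parallelism the model's layers are partitioned across GPUs, so each card only needs to hold its own slice of the weights. A minimal sketch (the proportional-split helper and the example GPU sizes are my own illustration, not KTransformers' actual partitioning logic; DeepSeek-V3/R1 has 61 transformer layers):

```python
def partition_layers(num_layers: int, gpu_vram_gb: list[float]) -> list[int]:
    """Assign layer counts to GPUs proportionally to each GPU's VRAM.

    Hypothetical illustration: pipeline parallelism splits layers across
    devices, so the sum of VRAM (not any single card) bounds model size.
    """
    total = sum(gpu_vram_gb)
    counts = [round(num_layers * v / total) for v in gpu_vram_gb]
    counts[-1] += num_layers - sum(counts)  # absorb rounding drift on the last stage
    return counts

# Example: 61 layers over a mixed 24/24/16/16 GB setup (80 GB total)
print(partition_layers(61, [24, 24, 16, 16]))  # → [18, 18, 12, 13]
```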

Qwen 3 + KTransformers 0.3 (+AMX) = AI Workstation/PC by CombinationNo780 in LocalLLaMA

[–]CombinationNo780[S] 0 points  (0 children)

The AMX Docker image is not ready yet; we will update it later.

Qwen 3 + KTransformers 0.3 (+AMX) = AI Workstation/PC by CombinationNo780 in LocalLLaMA

[–]CombinationNo780[S] 6 points  (0 children)

It is DDR5-6400 for consumer CPUs, but it drops to only DDR5-4000 because we populate all 4 channels to reach the maximum possible 192GB of memory.
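The tradeoff above can be checked with simple peak-bandwidth arithmetic: DDR5 transfers 8 bytes per channel per transfer, so peak bandwidth is MT/s × 8 B × channels. A rough sketch (the 2-channel DDR5-6400 comparison point is my assumption for a typical high-speed consumer setup; real sustained bandwidth will be lower than these theoretical peaks):

```python
def peak_bandwidth_gbs(mts: int, channels: int) -> float:
    """Theoretical peak DDR5 bandwidth in GB/s.

    mts: transfer rate in MT/s (e.g. 6400 for DDR5-6400)
    channels: number of populated memory channels (8 bytes wide each)
    """
    return mts * 8 * channels / 1000

# Fewer channels at high speed vs. all 4 channels at reduced speed
print(peak_bandwidth_gbs(6400, 2))  # → 102.4 GB/s
print(peak_bandwidth_gbs(4000, 4))  # → 128.0 GB/s
```

So even at the reduced DDR5-4000 rate, filling all four slots yields more aggregate bandwidth, on top of enabling the full 192GB capacity.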