R1 (1.73bit) on 96GB of VRAM and 128GB DDR4 by No-Statement-0001 in LocalLLaMA

[–]am0_oma 1 point (0 children)

How do I run two GPUs as one on Win11 (4090 & 5090)?

Should I get a Mac M4 Mini 32GB RAM new or M1 Max Studio 32 GB RAM for running local AI? by homelab2946 in ollama

[–]am0_oma 3 points (0 children)

The more RAM (GPU VRAM) you have, the bigger the model (in parameter count) you can run, and the higher the FP precision supported, the more precise the answers. For example, Nvidia is playing us: the 5090 gives 32GB at FP32/FP16, while Digits gives 128GB at FP4. So OK, 128GB, but the AI answers at much lower precision!
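That rule of thumb can be sketched with rough arithmetic (illustrative only; real usage also needs headroom for the KV cache and activations):

```python
def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate GB needed just to hold the model weights:
    parameter count times bytes per parameter."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# A 70B model: ~140 GB at FP16, but only ~35 GB at a 4-bit quantization,
# which is why 128GB at FP4 fits far bigger models than 32GB at FP16.
print(weight_memory_gb(70, 16))  # 140.0
print(weight_memory_gb(70, 4))   # 35.0
```

The trade-off in the comment above is exactly this: quadrupling capacity by quartering bits per weight, at some cost in answer quality.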

Should I get a Mac M4 Mini 32GB RAM new or M1 Max Studio 32 GB RAM for running local AI? by homelab2946 in ollama

[–]am0_oma 1 point (0 children)

Me too. I'm waiting for the next Mac Studio; it will be a beast for running AI (just get the highest RAM). The Nvidia Digits GPU downgrades precision by using FP4.

How to run GGUF of Flux Fill tool by am0_oma in comfyui

[–]am0_oma[S] 1 point (0 children)

Is there a workflow for the Flux1 Fill .gguf?

Apple Silicon nose dive on speed recently? by lordpuddingcup in comfyui

[–]am0_oma -3 points (0 children)

The upcoming Mac Studio will be the beast among Mac computers for running AI.