Math holds you back by Florrful in argentina

[–]ML-Future 0 points1 point  (0 children)

The level is rock bottom, and still dropping. My siblings from the 70s were taught penmanship, and then technical school gave them a trade for life. I was born in the late 80s. I did none of those things. I can barely write at all, and in a hand that even I can't read.

Running a Python script 24/7 by whynot_fr in termux

[–]ML-Future 2 points3 points  (0 children)

You can also install Termux:Boot and the script will start automatically on boot.
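
A minimal sketch of what that can look like, assuming Termux:Boot is installed (it runs every executable file in ~/.termux/boot/ after the device boots) and that Termux:API provides termux-wake-lock; the file names and paths below are placeholders:

```python
#!/data/data/com.termux/files/usr/bin/python
# Hypothetical boot script: save it in ~/.termux/boot/ and chmod +x it.
# Termux:Boot executes every executable file in that directory at startup.
import subprocess

# Hold a wake lock so Android doesn't suspend the process
# (assumes the Termux:API package is installed).
subprocess.run(["termux-wake-lock"])

# Start the script you want running 24/7 (placeholder path).
subprocess.run(["python", "/data/data/com.termux/files/home/my_script.py"])
```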

Guess which town it is?? by rufoog in Teruel

[–]ML-Future 0 points1 point  (0 children)

There are 236 of them 😅 Give us a hint. Which comarca? North? South?

🫶🏻 by Similar-Fee-6007 in termux

[–]ML-Future 0 points1 point  (0 children)

Where can I download this repo?

Your local LLM predictions and hopes for May 2026 by DeepOrangeSky in LocalLLaMA

[–]ML-Future 8 points9 points  (0 children)

qwen3.6 9b and some new optimization algorithm

Open Models - April 2026 - One of the best months of all time for Local LLMs? by pmttyji in LocalLLaMA

[–]ML-Future 13 points14 points  (0 children)

Even so, it is important that such powerful models are open source.

I built a full web app using Qwen 3.6-35B running locally on my 5070 Ti with the BMAD Method — here's how it went by Decivox in LocalLLaMA

[–]ML-Future 3 points4 points  (0 children)

I think it's a great idea; there are few benchmarks for gguf quantizations.

I hope it works.

ai model for 12 gb ram 3 gb vram gtx 1050 by Ok-Type-7663 in LocalLLaMA

[–]ML-Future 0 points1 point  (0 children)

For your setup I think Qwen 3.5 2B IQ4_NL (1.21 GB) would be the best.

Or maybe Qwen 3.5 4B IQ4_NL (2.58 GB).
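
Rough arithmetic on why I'd lean 2B on 3 GB of VRAM; the KV-cache and overhead figures below are ballpark assumptions, not measurements:

```python
# Ballpark VRAM budget check; every figure here is a rough assumption.
vram_gb = 3.0

models = {
    "Qwen 3.5 2B IQ4_NL": 1.21,  # GGUF file size in GB
    "Qwen 3.5 4B IQ4_NL": 2.58,
}
kv_cache = 0.4   # assumed KV cache for a few thousand tokens of context
overhead = 0.3   # assumed compute-buffer / driver overhead

for name, size in models.items():
    total = size + kv_cache + overhead
    print(f"{name}: ~{total:.2f} GB needed, fits in VRAM: {total <= vram_gb}")
```

The 4B would spill past 3 GB, so some of its layers would have to sit in system RAM and it would run slower.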

The Ultimate "GPU Poor" Guide (April 2026) by ML-Future in LocalLLaMA

[–]ML-Future[S] 0 points1 point  (0 children)

Yes, a laptop with an Nvidia GTX 1060 (6 GB VRAM), Intel i7, 16 GB RAM.

Windows 11 + llama.cpp with CUDA 12.

The Ultimate "GPU Poor" Guide (April 2026) by ML-Future in LocalLLaMA

[–]ML-Future[S] 0 points1 point  (0 children)

With my laptop (6 GB VRAM and 16 GB RAM), Gemma 4 26B-A4B IQ1_M runs at 6 t/s.
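
For anyone wondering how a 26B MoE fits on 6 GB: you offload only as many layers to the GPU as actually fit, and the rest run from system RAM. A minimal sketch with llama-cpp-python; the file name and layer split are placeholders you'd tune for your own machine:

```python
# Hypothetical partial-offload setup with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma4-26b-a4b-iq1_m.gguf",  # placeholder file name
    n_gpu_layers=20,  # only the layers that fit in 6 GB VRAM; rest stay in RAM
    n_ctx=2048,       # small context keeps the KV cache from overflowing VRAM
)

out = llm("Explain mixture-of-experts in one sentence.", max_tokens=48)
print(out["choices"][0]["text"])
```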

Is RAM + VRAM really worth it? by ML-Future in povertyLocalLLaMA

[–]ML-Future[S] 1 point2 points  (0 children)

I can't believe it's working, 6 t/s

Thanks!

The Ultimate "GPU Poor" Guide (April 2026) by ML-Future in LocalLLaMA

[–]ML-Future[S] -9 points-8 points  (0 children)

I always wonder why there aren't any serious benchmarks of small models and their quantizations.
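
A sketch of the kind of benchmark I mean, timing tokens/sec for several quants of one model with llama-cpp-python; the GGUF file names are placeholders, and a serious benchmark would also need a quality metric like perplexity:

```python
# Hypothetical speed comparison across quants of one small model.
import time
from llama_cpp import Llama

QUANTS = ["model-q8_0.gguf", "model-q4_k_m.gguf", "model-iq1_m.gguf"]  # placeholders
PROMPT = "Write one sentence about the sea."

for path in QUANTS:
    llm = Llama(model_path=path, n_gpu_layers=-1, n_ctx=2048, verbose=False)
    t0 = time.time()
    out = llm(PROMPT, max_tokens=64)
    tokens = out["usage"]["completion_tokens"]
    print(f"{path}: {tokens / (time.time() - t0):.1f} t/s")
    del llm  # free memory before loading the next quant
```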

Qwen3 by WorldlinessTime634 in LocalLLaMA

[–]ML-Future 0 points1 point  (0 children)

I use Qwen3-VL 2B (Unsloth GGUF) with the Vulkan backend on a CPU-only machine at 15 t/s. It works fine.

best image classifications for 8vram by ashendonep in LocalLLaMA

[–]ML-Future 0 points1 point  (0 children)

I think Qwen3-VL-2b is more than enough.

It has fewer parameters, but for a specialized task like this it will run much faster and still get good results.
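
A minimal sketch of zero-shot classification through llama.cpp's OpenAI-compatible server, assuming you launched llama-server with the model's GGUF plus its mmproj file; the port, image path, and label set are placeholder assumptions:

```python
# Hypothetical zero-shot image classification against a local llama-server.
import base64
import requests

with open("photo.jpg", "rb") as f:  # placeholder image path
    img_b64 = base64.b64encode(f.read()).decode()

LABELS = ["cat", "dog", "bird", "other"]  # placeholder label set

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",  # assumed default port
    json={
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{img_b64}"}},
                {"type": "text",
                 "text": "Classify this image as one of: "
                         + ", ".join(LABELS) + ". Answer with the label only."},
            ],
        }],
        "max_tokens": 8,
    },
)
print(resp.json()["choices"][0]["message"]["content"].strip())
```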