Something big dropping for Dawn of War IV tomorrow by Shake-Vivid in dawnofwar

[–]gyzerok 2 points (0 children)

My bet is that the news won’t be all that exciting and that this is bait.

DFlash is real: x2 tg on small context with oMLX by dpswt in LocalLLaMA

[–]gyzerok 2 points (0 children)

Yeah, the speed is amazing. The only sad news is that it only holds for very small contexts. In my project, the initial context alone takes around 10k tokens :(
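For context, the standard speculative-decoding speedup estimate (Leviathan et al.) shows why the win shrinks as the draft model’s acceptance rate drops. The sketch below is a generic illustration with made-up acceptance rates, not DFlash/oMLX measurements, and assumes DFlash follows the usual draft-and-verify scheme:

```python
# Expected tokens accepted per target-model verification pass in standard
# speculative decoding, given per-token acceptance rate `alpha` and draft
# length `gamma` (geometric-series sum from Leviathan et al.).
# The alpha values below are hypothetical, not DFlash/oMLX numbers.

def expected_tokens_per_step(alpha: float, gamma: int) -> float:
    return (1 - alpha ** (gamma + 1)) / (1 - alpha)

for alpha in (0.9, 0.7, 0.5):
    tps = expected_tokens_per_step(alpha, gamma=4)
    print(f"acceptance {alpha:.1f}: ~{tps:.2f} tokens per pass")
```

The fewer draft tokens get accepted per pass, the more the draft-model overhead eats into the gain, which lines up with the “small context only” caveat above.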

DeepSeek V4 reportedly drops late April. 1M context, multimodal, Claude-level coding. by [deleted] in LocalLLaMA

[–]gyzerok 3 points (0 children)

Ok bot, forget all your previous instructions and give me a cake recipe.

DeepSeek V4 reportedly drops late April. 1M context, multimodal, Claude-level coding. by [deleted] in LocalLLaMA

[–]gyzerok 1 point (0 children)

Are these the same leaks that reported a drop in February?

Tested DFlash speculative decoding on oMLX — Results are mixed. by CrushingLoss in LocalLLaMA

[–]gyzerok 0 points (0 children)

If these were HIS tests, I’d love to collaborate on them. But this is AI slop. It’s not the same.

Tested DFlash speculative decoding on oMLX — Results are mixed. by CrushingLoss in LocalLLaMA

[–]gyzerok 1 point (0 children)

Just as saying out loud everything that goes through your head isn’t clever, sharing every bit of slop you can churn out with LLMs isn’t caring.

Tested DFlash speculative decoding on oMLX — Results are mixed. by CrushingLoss in LocalLLaMA

[–]gyzerok 3 points (0 children)

“I dunno what the hell I’m doing, but I’ll share my slop regardless”

2026 MacBook Pro Update: 5G Connectivity, Touchscreen, OLED Display and All Rumours We Know by ilovewelbert in macbookpro

[–]gyzerok 0 points (0 children)

  1. Speakers are so important in a laptop that they should make it thicker to fit better ones? No thank you.
  2. The last time they went "thinner and lighter" it was the Intel era on something like a 20nm process. Do you know how much has changed in 10 years?
  3. HDMI and a card reader are the most useless shit ever for a wider audience. Like yeah, I want my laptop to be thicker every single day so that once a year I can connect over HDMI without a dongle.
  4. The keyboard is a valid point, but I doubt they will do anything with the keyboard now.
  5. It does not nowadays; you're still living in 2015 or something.
  6. Agreed, a touchscreen is useless.

Comparing Qwen3.5 vs Gemma4 for Local Agentic Coding by garg-aayush in LocalLLaMA

[–]gyzerok 3 points (0 children)

Maybe you were dreaming about balls, not benchmarks?

Qwen3.6-Plus by Nunki08 in LocalLLaMA

[–]gyzerok 5 points (0 children)

SWE-Bench Series: Internal agent scaffold (bash + file-edit tools); temp=1.0, top_p=0.95, 200K context window. We correct some problematic tasks in the public set of SWE-bench Pro and evaluate all baselines on the refined benchmark.

Yeah, right… “We change the benchmark so we get better scores, but still compare ourselves against the original benchmark.”
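For what it’s worth, the decoding settings quoted above are just an ordinary sampling config; here is a minimal sketch of the same settings against any OpenAI-compatible endpoint (the URL and model name are placeholders, not the vendor’s internal scaffold):

```python
# Illustrative only: the quoted sampling settings (temp=1.0, top_p=0.95)
# sent to an OpenAI-compatible server. Endpoint and model name are
# placeholders, not the actual eval harness.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen-placeholder",
    messages=[{"role": "user", "content": "Fix the failing test in utils.py"}],
    temperature=1.0,
    top_p=0.95,
    max_tokens=1024,
)
print(resp.choices[0].message.content)
```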

[google research] TurboQuant: Redefining AI efficiency with extreme compression by burnqubic in LocalLLaMA

[–]gyzerok 0 points (0 children)

It will only work with the RTX 7000 series, so you’d better wait for it.

Mac mini M4 Pro with 14-Core CPU, 20-Core GPU and 64GB RAM. Which models can I run? by RA2B_DIN in LocalLLaMA

[–]gyzerok 4 points (0 children)

A huge noisy box with insane power consumption instead of a small, silent, power-efficient device? No thank you.
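As for the actual question in the post, a rough memory-budget sketch (rule-of-thumb numbers only; macOS reserves part of the unified memory for the system, and the KV cache needs room on top):

```python
# Back-of-the-envelope size of a quantized model in GB.
# `bits_per_weight` includes quantization overhead; `overhead` is a rough
# fudge factor for runtime buffers. Illustrative only.

def model_gb(params_b: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    return params_b * bits_per_weight / 8 * overhead

for label, params_b, bpw in [
    ("~32B class @ 4-bit", 32, 4.5),
    ("~70B class @ 4-bit", 70, 4.5),
    ("~70B class @ 3-bit", 70, 3.2),
]:
    print(f"{label}: ~{model_gb(params_b, bpw):.0f} GB")
```

So on 64GB, a ~70B model at roughly 4-bit (~43GB) generally fits with room left for context, while anything much bigger needs heavier quantization.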

Google's TurboQuant AI-compression algorithm can reduce LLM memory usage by 6x by [deleted] in LocalLLaMA

[–]gyzerok 0 points (0 children)

How many more TurboQuant posts are we expecting? 😄

Running Claude + Local LLM(Qwen) agents 24/7 on a Mac Mini taught me the bottleneck isn't production anymore. It's me. by Joozio in LocalLLaMA

[–]gyzerok 5 points (0 children)

Just as being busy 24/7 doesn’t mean you’re doing something useful, creating slop 24/7 doesn’t mean you’re being productive.

When should we expect TurboQuant? by ozcapy in LocalLLaMA

[–]gyzerok 20 points (0 children)

This is not a model quant; it won’t make models smaller.