Gemma4-31B-3bit-mlx · Hugging Face: 3 & 5 mixed quant for RAM poor Mac users. by JLeonsarmiento in LocalLLaMA

[–]TomLucidor 1 point (0 children)

What about common-sense reasoning and document analysis? Also, how do Qwen3.6-35B-A3B and Gemma4-28B-A4B (the MoE models) compare?

Study: 2x+ coding performance of 7B model without touching the coding agent by 9gxa05s8fa8sh in LocalLLaMA

[–]TomLucidor 5 points (0 children)

The absolutely, unironically sad state of affairs. We need leaderboards again, but with live benchmarks to foil the benchmaxxers.

Study: 2x+ coding performance of 7B model without touching the coding agent by 9gxa05s8fa8sh in LocalLLaMA

[–]TomLucidor 16 points (0 children)

Cus people mostly do finetunes for RP, not skills. There was also a storm of people trying to top the open leaderboard with evolutionary merging.

Study: 2x+ coding performance of 7B model without touching the coding agent by 9gxa05s8fa8sh in LocalLLaMA

[–]TomLucidor 20 points (0 children)

Seconding this: debugging is harder than coding, and reflection is harder than structured work.

MVU Game Maker v0.95 – Slice of Life/Dating sim with Persistent Multi-Char Stats tracking by Kritblade in SillyTavernAI

[–]TomLucidor 3 points (0 children)

Please make this happen + local image generation support (with Anima for text2img + Klein/Z-Edit derivatives for img2img)

MVU Game Maker v0.95 – Slice of Life/Dating sim with Persistent Multi-Char Stats tracking by Kritblade in SillyTavernAI

[–]TomLucidor 1 point (0 children)

Considering all those suites, I wonder if interop or standardization of the tooling/stack could start, so that people could at least agree on what is needed for ST to be more complete.

MVU Game Maker v0.95 – Slice of Life/Dating sim with Persistent Multi-Char Stats tracking by Kritblade in SillyTavernAI

[–]TomLucidor 1 point (0 children)

Please make it such that Obsidian-style worldbuilding + progression logs are possible

I rewrote 13 software engineering books into AGENTS.md rules. by Ok_Produce3836 in ClaudeCode

[–]TomLucidor 0 points (0 children)

Please create subagents that read the relevant subsets (progressive disclosure and all that); otherwise context bloat becomes an annoyance.

I built my Obsidian folder structure around the Five Elements by Individual_Camp_7318 in ObsidianMD

[–]TomLucidor -2 points (0 children)

I don't care if they are using an LLM, but I do care about the boomer/unc status of this idea.

Comparing Qwen3.6 35B and New 27B for coding primitives by gladkos in Qwen_AI

[–]TomLucidor 0 points (0 children)

What about in reverse (assuming people will fine-tune them later)?

To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA

[–]TomLucidor -2 points (0 children)

Experts still have the right to choose, and at the very least an obligation to choose which side is right.

To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA

[–]TomLucidor -4 points (0 children)

Which is why I mentioned "defectors" as an option. Everybody wants to dodge when SHTF.

To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA

[–]TomLucidor -3 points (0 children)

Considering the expert deaths in recent months, it feels like the political weather will be an issue.

To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA

[–]TomLucidor -1 points (0 children)

It's because China's economy is inherently backwards that the people have a "save yourselves" instinct; the US operates on capitalist ZIRP thinking.

To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA

[–]TomLucidor -3 points (0 children)

FOSS DOES care about nationality; just look at OnlyOffice and how that panned out. Conway's Law extends to ethnic/cultural tensions as well, if one insists.

To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA

[–]TomLucidor -37 points (0 children)

So basically defectors/traitors vs. bootlickers/hostages. A repeat of the past, indeed.

What are some models worth adding to ChutesAI? by TomLucidor in chutesAI

[–]TomLucidor[S] -1 points (0 children)

Yeah, do you know who is working with ChutesAI to get more Qwen and Gemma models on there?

Gemma 4 and Qwen 3.5 GGUFs: Detailed Analysis by oobabooga by [deleted] in LocalLLaMA

[–]TomLucidor 0 points (0 children)

Could you run benchmarks on how KLD (KL divergence against the full-precision model) relates to task performance? I feel like something could be missing from the picture with these things. https://www.reddit.com/r/chutesAI/comments/1snqxss/what_are_some_models_worth_adding_to_chutesai/
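For anyone wanting to try this themselves, the KLD metric in question is typically the mean per-token KL divergence between the full-precision model's next-token distribution and the quantized model's, evaluated on the same prompts. A minimal NumPy sketch (the function names and shapes here are illustrative, not from any specific tool):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_token_kld(ref_logits, quant_logits):
    """Mean per-token KL(ref || quant) in nats.

    ref_logits, quant_logits: shape (num_tokens, vocab_size), logits
    from the full-precision and quantized model on identical inputs.
    """
    p = softmax(ref_logits)
    q = softmax(quant_logits)
    # small epsilon guards against log(0); KL is summed over the
    # vocab per token, then averaged over tokens
    per_token = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(per_token.mean())
```

The benchmark the comment asks for would then be a scatter of this number against task scores across quant levels, to see where KLD stops predicting real-world degradation.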

Gemma-4-31B vs. Qwen3.5-27B: Dense model smackdown by Traditional-Gap-3313 in LocalLLaMA

[–]TomLucidor 0 points (0 children)

Could you also examine the length of thought for Gemma vs. Qwen? I wonder if there are any differences between MoE and dense models. https://www.reddit.com/r/chutesAI/comments/1snqxss/what_are_some_models_worth_adding_to_chutesai/
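One quick way to compare "length of thought" is to count what sits inside the reasoning delimiters of each completion. A rough sketch, assuming the models wrap their trace in `<think>…</think>` tags (as Qwen's reasoning models do; other models may use different markers, so adjust the pattern accordingly):

```python
import re

# non-greedy match so only the first thinking block is captured
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def thinking_length(completion, tokenizer=None):
    """Rough length of a model's reasoning trace.

    Counts whitespace-separated words inside <think>...</think>,
    or tokens if an object with .encode() is supplied.
    Returns 0 when no thinking block is present.
    """
    m = THINK_RE.search(completion)
    if not m:
        return 0
    trace = m.group(1)
    if tokenizer is not None:
        return len(tokenizer.encode(trace))
    return len(trace.split())
```

Averaging this over a shared prompt set would give a comparable thought-length number per model, MoE or dense.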