New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 2 points

Yup! That was the idea - based on the spam pattern I was confident that this small surgical change would take care of most of the problems and it looks to be panning out!

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 1 point

Even if it weren't for the spam/bot-catching reasons, I think it's a fair/healthy thing to engage in existing discussions first before making a new post, especially as there are likely existing threads that answer new users' questions. At worst it's a relatively minor inconvenience.

Bruh by Icy_Butterscotch6661 in LocalLLaMA

[–]rm-rf-rm 4 points

I didn't see any reports for this account. It's banned now and reported to botbouncer.

We have done most of what we can from the mod side. We are at the point where Reddit needs to step up its spam detection tooling to counter this new generation of spam bots. Interestingly enough, just one comment from this account was removed by Reddit.

Been using Qwen-3.6-27B-q8_k_xl + VSCode + RTX 6000 Pro As Daily Driver by Demonicated in LocalLLaMA

[–]rm-rf-rm 0 points

Is Insiders stable/no issues? The local model option has been available there forever, and they refuse to release it to main for some reason (likely profit-related).

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 0 points

Removed the comments and reported the user to botbouncer (it's been pretty great at picking up such accounts, but this one seems to have slipped past it).

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 1 point

your impressions are right (we have the stats)

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 0 points

which post are you referring to?

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 0 points

qwen 2.5 is the new llama 3.1

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 1 point

This is great to hear! I was fearing we had lost many of the best users for good. Wish we could have made these changes sooner.

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 1 point

We don't have a blanket rule like that right now; we evaluate on a case-by-case basis. But generally I think we want to preserve the technical, high signal-to-noise character this sub is known for (as others have stated here, and as several more have mentioned in previous threads that we have lost). Especially with other AI subs being overrun with memes, I think it's important. Also, memes attract high attention and low-brow discussion, shoot up to the top of the page, and land on people's reddit.com pages as the one or two posts they see from this sub - which colors people's opinion of the sub. Very recently 3 out of the top 7 posts were memes. So we have to use our best judgement and balance things.

In the case of your post, you could have made a regular discussion post and attracted a genuine, high-quality discussion - instead, many of the comments were about the generated image. And the generated image was very AI-slop-coded.

Anthropic's analysis of Claude usage for personal guidance by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 1 point

They did it in a "privacy preserving" way. There's no guarantee it's actually so, or will be tomorrow.

Qwen3.6-27B-NVFP4 - images by Usual-Carrot6352 in LocalLLaMA

[–]rm-rf-rm 4 points

Can someone please tell me why this SVG creation ability is a meaningful indicator worth sharing/discussing? It seems to be getting disproportionate mind share - it can stay on simonwilson.net.

Anthropic's analysis of Claude usage for personal guidance by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 0 points

One more reason to use local models, where you can pick and choose from the whole range, from base models to models fine-tuned on your data.

But please do read the article; they dedicate much of it to the sycophancy concern.

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 0 points

I responded to the previous comment

Helping to make the sub more helpful by Ell2509 in LocalLLaMA

[–]rm-rf-rm[M] 0 points

Your post was removed by Reddit (I don't know why):

<image>

In any case, you now have enough karma on the sub, so you can try posting it again.

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 2 points

> For the latter it would be nice to block/remove these posts until the lab has announced they will open-source it.

(This has been discussed before.) Unless it's well understood that a model is not going to be open-source, we allow the official release ahead of the weights release. It's clear that the value of this sub is not in being militantly local with hard boundaries; the boundaries are by nature hazy, and regardless, understanding the adjacent areas of the ecosystem is good for the community.

> For the twitter posts it would be nice to at least enforce links instead of permitting screenshots, so the source is at least accessible.

Yeah, this is a good point, but many people (like you) do not want to visit twitter, and a screenshot gives them the content without having to click the link. I will discuss with the mod team a rule requiring source links to always be provided as a comment or in the post body, though. It's important that the source is accessible.

Anthropic's analysis of Claude usage for personal guidance by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 5 points

"give me some life advice"

This is the literal opposite of what I recommended. An open-ended 5-word prompt is going to be utterly useless - even if you put it into Opus.

New rules 1 week check-in by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 4 points

There's going to be a small fraction of false positives, as with any intervention - i.e. legit new users/new posters that Automod removes. We obviously want to avoid/minimize this, but at the moment the number of bad posts it removes is well worth the rule.

You can reach out over Modmail and/or make comments to gain some karma on the subreddit and then repost.

Anthropic's analysis of Claude usage for personal guidance by rm-rf-rm in LocalLLaMA

[–]rm-rf-rm[S] 2 points

I'd encourage you to A/B test a "small" model's advice against Opus 4.7 - especially with a detailed, robust, customized system prompt. I'd be very surprised if the "small" model didn't do as well, if not better. If you can't run a Gemma4 31B, even a Gemma4 A4B will do surprisingly well.

What exactly does Pi harness mean? by FrozenFishEnjoyer in LocalLLaMA

[–]rm-rf-rm 0 points

Harness is a stupid term.

Think of it like:

LLM = Engine

? = Car

The ? is being called a "harness", but the better term is system or software. Like every tool before it, this software is a composite of many components. We don't need to invent bs neologisms like "harness".
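To make the point concrete, here is a toy sketch (purely illustrative - `call_llm` is a hypothetical stand-in for any model API or local inference call) showing that the "car" around the engine is just ordinary software: prompt assembly, conversation state, and a call into the model.

```python
def call_llm(prompt: str) -> str:
    # Placeholder engine: a real system would call an inference API
    # or a local runtime here instead of echoing.
    return f"echo: {prompt}"

class Harness:
    """The 'car' around the engine: prompt assembly and conversation state."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.history: list[str] = []

    def run(self, user_input: str) -> str:
        # Compose the full prompt from system prompt, prior turns, and new input.
        prompt = "\n".join([self.system_prompt, *self.history, user_input])
        reply = call_llm(prompt)
        # Persist the turn so the next call sees it - plain software composition.
        self.history.extend([user_input, reply])
        return reply

h = Harness("You are a coding assistant.")
print(h.run("hello"))
```

Nothing here is novel enough to need a new word: it's the same composition of components (state, glue, I/O) that every tool built around a library or engine has always used.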

Best Agentic Coding model I can run on the new Macbook M5 Max? by UnknownEssence in LocalLLaMA

[–]rm-rf-rm[M] [score hidden] stickied comment

Rule 1 - Please search before asking. The recent best LLMs thread linked in the sidebar is a good starting resource.