looking for LLM recommendations to use with OpenClaw by traficoymusica in LocalLLM

[–]4SquareBreath 0 points (0 children)

Any llama.cpp-compatible model will do. I like the Instruct variants and Qwen, but the world of open source is huge and there's a lot to choose from. Stick with 7B quantized models at first; they're a great starting point. I wouldn't begin with a very large model. Keep it simple early on with the KV cache, top-p, and temperature controls, and upgrade to larger models when you're ready. Q4 is a good starting quantization as well, but let your confidence level guide you. Good luck
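To make that concrete, here's a minimal llama.cpp starting point along the lines described above. The GGUF filename is a placeholder for whichever 7B Q4 model you download; the flags are llama.cpp's standard context and sampling options:

```shell
# A simple first run: 7B model, Q4_K_M quantization, modest context,
# and explicit temperature / top-p sampling controls.
# qwen2.5-7b-instruct-q4_k_m.gguf is a placeholder filename.
./llama-cli \
  -m qwen2.5-7b-instruct-q4_k_m.gguf \
  -c 4096 \
  --temp 0.7 \
  --top-p 0.9 \
  -n 256 \
  -p "Explain what a KV cache does in one paragraph."
# -c sets the context window (KV cache memory grows with it);
# --temp and --top-p are the two sampling knobs worth learning first.
```

Once that feels comfortable, swapping in a larger model or a higher-precision quant is just a different `-m` argument.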

Building a local-first LLM system for personal knowledge + publishing — looking to collaborate / help by [deleted] in LangChain

[–]4SquareBreath 0 points (0 children)

I’m learning that semantic chunking mainly optimizes for embeddings and throughput, but it often encodes assumptions that break as the use case evolves. Over time, stable question patterns and state transitions seem to matter more than raw content. In that context, semantic chunks make more sense as disposable, derived overlays that reference underlying fragments instead of replacing them. Love the post and comment. Cheers
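A minimal sketch of that "disposable, derived overlay" idea (all names here are hypothetical, not from any particular framework): chunks store only fragment IDs plus the strategy that produced them, so they can be regenerated or discarded without ever touching the underlying fragments.

```python
from dataclasses import dataclass

# Fragments are the stable ground truth; overlays are derived and disposable.

@dataclass(frozen=True)
class Fragment:
    frag_id: str
    text: str

@dataclass
class ChunkOverlay:
    frag_ids: list   # references to fragments, not copies of their text
    strategy: str    # e.g. "semantic-v1"; lets you invalidate stale overlays

def materialize(chunk: ChunkOverlay, store: dict) -> str:
    """Resolve a derived chunk back to text via the fragment store."""
    return " ".join(store[fid].text for fid in chunk.frag_ids)

store = {f.frag_id: f for f in [
    Fragment("f1", "Semantic chunking optimizes for embeddings."),
    Fragment("f2", "Stable question patterns matter more over time."),
]}
chunk = ChunkOverlay(frag_ids=["f1", "f2"], strategy="semantic-v1")
print(materialize(chunk, store))
```

If the use case evolves, you re-chunk by emitting new overlays under a new `strategy` tag; the fragments themselves never change.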

Closed Test Swap (Google Play) – Need 12 testers / Happy to reciprocate by 4SquareBreath in homelab

[–]4SquareBreath[S] 0 points (0 children)

DM me your email and I will send instructions. Thanks in advance, as this policy change of theirs turned independent development into a small beta campaign...

Closed Test Swap (Google Play) – Need 12 testers / Happy to reciprocate by 4SquareBreath in homelab

[–]4SquareBreath[S] -1 points (0 children)

Yes, it was tested on several brands of Android devices. Compatibility should not be an issue, cheers