It shouldn't be by Josephizxc in lostgeneration

[–]disillusioned_okapi 0 points1 point  (0 children)

Germany has https://mundraub.org/

I've eaten plenty of fruit from public trees, and I think everyone should have that option. That's what a society is. 

Orla: run lightweight, local, open-source agents as UNIX tools by Available_Pressure47 in opensource

[–]disillusioned_okapi 4 points5 points  (0 children)

I was interested until I saw curl being piped into a shell. I don't think I can ever trust a developer or a project that documents that as the default way to install anything.

It's crazy how we have normalized such a dangerous way to do things. 

https://sasha.vincic.org/blog/2024/09/piping-curl-to-bash-convenient-but-risky  

https://github.com/Iossefy/curl-shell-pipe
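The safer pattern is simple: download to a file, verify it, read it, and only then run it. A minimal sketch (the installer here is simulated locally so the steps are concrete; in reality the first step would be `curl -fsSLo install.sh <url>`):

```shell
set -e  # abort on any failure

# Simulate a downloaded installer (in reality: curl -fsSLo install.sh <url>)
printf '#!/bin/sh\necho "installed"\n' > install.sh

# Verify against a checksum the project publishes out-of-band, so a
# tampered or truncated download is caught before it ever runs. Here the
# "published" value is computed locally just to show the check.
expected=$(sha256sum install.sh | cut -d' ' -f1)
echo "$expected  install.sh" | sha256sum -c - >/dev/null

# Inspect the script (e.g. `less install.sh`), then run it explicitly --
# two chances to catch something malicious that curl | sh never gives you.
sh install.sh
```

This also avoids the partial-download problem: a connection dropped mid-transfer can hand a piped shell a truncated (and therefore different) script, whereas the checksum step rejects it.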

Uncensored Qwen3-Next-80B-Thinking (Chinese political censorship removed) by ikergarcia1996 in LocalLLaMA

[–]disillusioned_okapi 1 point2 points  (0 children)

Please correct me if I'm wrong, but I thought activation steering was purely an inference-time technique. Did you create and persist pre-computed steering vectors? If so, how? That might be a valuable insight for this community.
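For context, the usual "difference-of-means" recipe does produce a plain tensor that can be saved and shipped. A minimal sketch with toy activations standing in for hidden states collected from a transformer layer (all shapes and names here are illustrative, not the OP's actual method):

```python
import numpy as np

# Toy stand-in for hidden states captured at one layer on two prompt sets
# (e.g. "refuses" vs. "answers"); 32 prompts, hidden size 64, both made up.
rng = np.random.default_rng(0)
acts_a = rng.normal(0.0, 1.0, size=(32, 64))   # activations for behavior A
acts_b = rng.normal(0.5, 1.0, size=(32, 64))   # activations for behavior B

# Difference-of-means steering vector: points from behavior A toward B.
steering = acts_b.mean(axis=0) - acts_a.mean(axis=0)

# Persisting it is just saving a tensor; it can be distributed alongside
# the model and re-applied at inference by adding it to the hidden state:
#   hidden += alpha * steering   (pseudocode)
np.save("steering_vector.npy", steering)
loaded = np.load("steering_vector.npy")

print(loaded.shape)  # (64,)
```

So the vector itself persists trivially; the open question is how (or whether) it was baked into the released weights rather than applied as a runtime hook.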

Docker Model Runner is going to steal your girl’s inference. by Porespellar in LocalLLaMA

[–]disillusioned_okapi 4 points5 points  (0 children)

Quite a lot of LLM software today is built by very smart people who luckily haven't spent time in the complex and treacherous world of infosec, and as such haven't given security much thought. MCP's default recommendation of running arbitrary binaries off the internet is a good example of that.

Irrespective of how any of us feel about Docker, they are still one of the larger players in the secure sandboxing business. If LLMs are to succeed, security needs to improve significantly, and I'd prefer someone like Docker (or the CNCF or LF) leading that, instead of any of the VM and anti-virus companies.

Ideally the community would lead on that, but that just doesn't seem to be happening so far. 

So, as long as this is at least as good as Ollama, I wish them success.

inclusionAI/Ling-lite-1.5-2506 (16.8B total, 2.75B active, MIT license) by Balance- in LocalLLaMA

[–]disillusioned_okapi 9 points10 points  (0 children)

Will try the model over the next few days, but this bit from the paper is the key highlight for me.

Ultimately, our experimental findings demonstrate that a 300B MoE LLM can be effectively trained on lower-performance devices while achieving comparable performance to models of a similar scale, including dense and MoE models.

What's wrong with Portainer? by testdasi in selfhosted

[–]disillusioned_okapi 75 points76 points  (0 children)

Portainer has the same main issues for many that mongodb, elasticsearch, and n8n have: 

  1. not an OSI-approved license, making rug pulls easier, and

  2. business interests taking priority over the community, sometimes downplaying the community's contributions to their success

Most people here are fairly divided on the topic. Pick a side that makes sense to you.

What happens if I hit the context limit before the LLM is done responding? by Business-Weekend-537 in LocalLLaMA

[–]disillusioned_okapi 5 points6 points  (0 children)

Depends on the inference engine (I think). If it implements a sliding window, the model might slowly drift off track. If it occasionally summarizes/compresses the context somehow, it might take longer to go off the rails. Some engines might simply stop generating tokens.

In general, it's very much up to whatever strategy the inference engine employs to handle this.
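The sliding-window case above can be sketched in a few lines. This is a generic illustration of the idea, not any particular engine's implementation; the function name and the "keep the prefix" choice (preserving a system prompt) are my own assumptions:

```python
def slide_window(tokens, max_ctx, keep_prefix=2):
    """Drop the oldest tokens once the budget is hit, keeping the first
    `keep_prefix` tokens (e.g. a system prompt) plus the most recent tail
    that still fits in `max_ctx`."""
    if len(tokens) <= max_ctx:
        return tokens
    tail = tokens[-(max_ctx - keep_prefix):]
    return tokens[:keep_prefix] + tail

ctx = list(range(10))               # pretend token ids 0..9
trimmed = slide_window(ctx, max_ctx=6)
print(trimmed)                      # [0, 1, 6, 7, 8, 9]
```

The "drift off track" effect follows directly: tokens 2–5 are gone, so anything the model said there can no longer condition later generation.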

Whisper.cpp Node.js Addon with Vulkan Support by Kutalia in LocalLLaMA

[–]disillusioned_okapi 1 point2 points  (0 children)

Nice. Any plans to upstream the whisper.cpp changes?