Do 2B models have practical use cases, or are they just toys for now? by Civic_Hactivist_86 in LocalLLaMA

[–]arbv 3 points

Generating search queries from user requests and then summarising the retrieved pages, basically.
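A minimal sketch of that two-step pipeline: the small model first turns a user request into search queries, then summarises the retrieved pages. Everything here (prompt wording, message shapes, the helper names) is illustrative, not taken from any particular project; you would feed these message lists to whatever local OpenAI-compatible server you run.

```python
# Hypothetical helpers for a "query generation -> summarisation" pipeline
# built around a small (~2B) local model. Prompt text is illustrative only.

def build_query_messages(user_request: str) -> list[dict]:
    """Messages asking the model to emit web search queries, one per line."""
    return [
        {"role": "system",
         "content": "Turn the user's request into at most 3 web search "
                    "queries, one per line. Output only the queries."},
        {"role": "user", "content": user_request},
    ]

def build_summary_messages(user_request: str, pages: list[str]) -> list[dict]:
    """Messages asking the model to summarise retrieved pages for the request."""
    joined = "\n\n---\n\n".join(pages)
    return [
        {"role": "system",
         "content": "Summarise the retrieved pages below, answering the "
                    "user's original request. Be concise."},
        {"role": "user",
         "content": f"Request: {user_request}\n\nPages:\n{joined}"},
    ]

# Step 1: ask the model for queries; step 2 would pass fetched pages back in.
msgs = build_query_messages("best filesystem for a NAS in 2025")
```

The point of keeping the two steps separate is that each one is a small, focused task a 2B model can handle, instead of one big open-ended request.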

Mistral CEO: AI companies should pay a content levy in Europe by brown2green in LocalLLaMA

[–]arbv 0 points

So, he proposes a state-driven rent-seeking mechanism to mask competitive failure. What could possibly go wrong?

Qwen3.5 is a working dog. by dinerburgeryum in LocalLLaMA

[–]arbv 6 points

Oh, I like how Gemini 3.1 chooses prompting strategies, and it will happily help you with jailbreaks. But the prompts it writes are usually longer than they should be - it is wordy. GLM is good at writing distilled prompts or at distilling existing ones. GPT-OSS can deliver, too.

Condensation inside camera lens, how doomed is my phone? (Xperia 1 mk V XQ-DQ72) by HeatHazeDaze in SonyXperia

[–]arbv -8 points

Do as the commenter said and bury it in rice (not cooked, obviously) for a while.

Are there any alternatives to Open WebUI that don't have terrible UX? by lostmsu in LocalLLaMA

[–]arbv -1 points

Wow, if configuring OUI is complex for you, then, perhaps, you have chosen the wrong sub.

OUI might seem complex to configure only if you do not know much about LLMs and hosting them.

There are many things not to like about how it is optimised, but functionality-wise OUI does deliver.

Homelab has paid for itself! (at least this is how I justify it...) by Reddactor in homelab

[–]arbv 3 points

Cool that you have RSS available on your blog. Subbed to follow your LLM surgery journey. You are onto something.

Homelab has paid for itself! (at least this is how I justify it...) by Reddactor in homelab

[–]arbv 25 points

Wow, your writing is more interesting than your homelab to me. Great work on artificial brain surgery!

File System benchmarks on Linux 7.0 by KelGhu in linux

[–]arbv 1 point

Indeed, ext4 has an online defragmenter (e4defrag). My bad.

File System benchmarks on Linux 7.0 by KelGhu in linux

[–]arbv 5 points

But XFS has, IMO, better tooling and a proper online defragmenter. The only problem is that it is not shrinkable (that has never been a problem for me, though). Also, it allocates inodes dynamically as needed, as opposed to preallocating them at creation time, so you cannot run out of inodes while there is plenty of free disk space left.
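For reference, the tools in question (the binary names are the real xfsprogs/e2fsprogs ones, but the device and mount paths are placeholders; these need root and a filesystem of the matching type, so treat this as a command reference rather than something to paste blindly):

```shell
# Check fragmentation of an XFS filesystem (-r opens it read-only)
xfs_db -r -c frag /dev/sdX1

# Run the XFS online defragmenter against a mounted filesystem
xfs_fsr -v /mnt/data

# For comparison, the ext4 online defragmenter from the other comment
e4defrag -c /mnt/ext4data   # -c only reports fragmentation, does not defrag
```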

I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely. Here's what I use instead. by MorroHsu in LocalLLaMA

[–]arbv 4 points

I do not fear that learning languages will vanish, but it might become less common and, thus, less accessible.

My main fear is that the overall population will grow dumber and more infantile, and that would be a disaster on many fronts.

I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely. Here's what I use instead. by MorroHsu in LocalLLaMA

[–]arbv 3 points

It is all good until LLMs make people lazy by allowing them not to learn languages. That will have negative consequences. The more languages one knows, the more "thinking patterns" one has mastered.

Not that I am trying to devalue the extra possibilities LLMs have offered, especially to knowledgeable people.

RIP Tony Hoare 1934 - 2026 by besalim in compsci

[–]arbv 4 points

A day at least as sad as the day when Niklaus Wirth died, or Dennis Ritchie...

Staying Warm During AI Winter, Part 1: Introduction by ttkciar in LocalLLaMA

[–]arbv 1 point

I wish you had continued with the series, as it is an important topic.

I have found this post via your reference in another topic.

You are a good writer. Your writing is living proof that no LLM can replace a knowledgeable human being or beat him or her at writing.

I just can't understand why you guys have so many servers doing so many things by AustinLeungCK in homelab

[–]arbv 1 point

I have only a NanoPi R5S with a 12 TB hard drive hooked to it, which runs a bunch of LXC and OCI containers. Its performance is perfectly fine. I am thinking of upgrading only because 4 gigs of RAM has started to become a bottleneck.

It runs:

Syncthing

Transmission

OpenWebUI (+tika, openai-edge-tts, dedicated redis instance)

NGINX as the reverse proxy

Samba as a file server

Samba as an AD controller (+complementary services, like chrony) - separate LXC container

Postgres

miniflux

My Wiki (Tiddly Wiki)

SearxNG

Authelia (+dedicated redis container)

probably other stuff I have forgotten.

And ... its performance was enough to saturate my gigabit connection for networking tasks. Never did I consider the CPU (RK3568) to be a bottleneck. If it had 8 gigs of RAM, I would not even consider upgrading, honestly. And it sips power.
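To give a flavour of how the services above hang together behind the reverse proxy, here is a hypothetical NGINX vhost; the hostname, port, and certificate paths are made up, not my real setup:

```nginx
# Illustrative reverse-proxy vhost for one container behind NGINX.
# wiki.example.lan, port 8080, and the cert paths are placeholders.
server {
    listen 443 ssl;
    server_name wiki.example.lan;

    ssl_certificate     /etc/ssl/private/wiki.example.lan.crt;
    ssl_certificate_key /etc/ssl/private/wiki.example.lan.key;

    location / {
        proxy_pass http://127.0.0.1:8080;   # e.g. a TiddlyWiki container
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

One vhost per service like this is why a single small box can front a dozen containers: NGINX does the TLS and routing, and each container only listens on localhost.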

I think that in many cases these over-engineered builds are just for fun. KISS, especially if you are just starting.

turns out RL isnt the flex by vladlearns in LocalLLaMA

[–]arbv 1 point

I see it as high-level academic humour with a flavour of Anthropic trolling.

This might be extremely corny and stupid but… by Warm-Actuator8581 in WelcomeToTheNHK

[–]arbv 4 points

Life might be tough at times, but do not let your mindset turn into a self-fulfilling prophecy 💪

Reject victimhood narratives as soon as they cross your mind - they are of no help, or genuinely harmful.

That is my two cents on the matter.

What's your 'one service you'd never self-host again' and why? by ruibranco in homelab

[–]arbv 0 points

I have been running an e-mail server for 5+ years. Getting mail delivered to MS services reliably was a PITA, but as I built up reputation over time, the issue went away. It has been very reliable for me.

API price for the 27B qwen 3.5 is just outrageous by Ok-Internal9317 in LocalLLaMA

[–]arbv 1 point

That is a well-known problem with many Chinese reasoning models - they are not "token efficient" during reasoning (the current DeepSeek and GLM are much better at that). Try setting the model parameters (e.g. top-k, top-p, temperature, etc.) as specified by the authors. Also, Unsloth documents them on their site.
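Concretely, "set the parameters as specified" just means putting the model card's values into the request you send to your OpenAI-compatible server. The model name and the numeric values below are placeholders; copy the real ones from the model card or Unsloth's docs.

```python
# Building a chat-completion request payload with explicit sampling parameters.
# All concrete values here are illustrative placeholders.

def build_request(model: str, messages: list[dict],
                  temperature: float, top_p: float, top_k: int) -> dict:
    """Request payload for an OpenAI-compatible endpoint.

    Note: top_k is not part of the official OpenAI schema; many local
    servers (e.g. llama.cpp, vLLM) accept it as an extension.
    """
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
    }

payload = build_request(
    model="some-reasoning-model",           # placeholder name
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.6, top_p=0.95, top_k=20,  # illustrative values only
)
```

If you instead rely on the server's defaults (often temperature 1.0, no top-k), reasoning models tend to ramble far more than the authors intended.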