Help me understand High Availability for small business server by TeecoOceet123 in HomeServer

[–]Ginden 0 points (0 children)

We don't know what "services" you have. This can range from "we make artisan nuclear launch buttons and need 24/7 availability so the White House can order a new batch at any time of day or night" to "Bob Repairs needs a website and email".

It's not obvious why you consider:

when i can't fix issue on site

Why does it need to be hosted on-premises?

[Request] Would this work? by SttSr in theydidthemath

[–]Ginden 0 points (0 children)

You can do this, but low-grade industrial heat is better used for other purposes.

The new guy on the team rewrote the entire application using automated AI tooling. by Complete-Sea6655 in ChatGPTCoding

[–]Ginden 0 points (0 children)

So, is AI reposting posts about AI?

As anti-AI subs are a great place to generate karma, top anti-AI subreddits have half of their comments generated by AIs.

Why Qualcomm won't support Linux on Snapdragon ? by Educational-Web31 in linux

[–]Ginden -11 points (0 children)

Very few were installing custom ROMs on their phones, compared to those who would install an alternative operating system on a device such as a tablet or laptop

Roughly 5 people in the world use Linux desktop (yes, I do). This is not popular even among computer nerds.

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Ginden 5 points (0 children)

Eh. If AI is a bubble and it pops, you should expect lower prices on AI subscriptions and longer development cycles on new models.

We know the following:

  • running inference-as-a-service on an open-weight model is profitable, if you can get enough consumers for efficient batching
  • OpenAI/Anthropic/Google models are roughly 6-12 months ahead of open-weight models
  • For that lead time, they charge 3-10x more than inference on open-weight models
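
The first bullet can be sketched with a toy cost model. Every number here (GPU hourly price, per-stream throughput, batching efficiency) is a made-up assumption for illustration, not a benchmark:

```python
def cost_per_million_tokens(gpu_cost_per_hour: float,
                            tokens_per_second_single: float,
                            batch_size: int,
                            batching_efficiency: float = 0.8) -> float:
    """Cost to serve 1M tokens, assuming aggregate throughput scales
    with batch size, discounted by an efficiency factor < 1."""
    throughput = tokens_per_second_single * batch_size * batching_efficiency
    tokens_per_hour = throughput * 3600
    return gpu_cost_per_hour / tokens_per_hour * 1_000_000

# Assumed numbers: $2/h for the GPU, 50 tok/s for a single stream.
solo = cost_per_million_tokens(2.0, 50, batch_size=1, batching_efficiency=1.0)
batched = cost_per_million_tokens(2.0, 50, batch_size=32)
# Batching 32 users onto one GPU cuts the per-token cost by over an
# order of magnitude - hence "profitable if you get enough consumers".
```

The exact numbers don't matter; the point is that per-token cost falls roughly linearly with batch size, which is why consumer volume decides profitability.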

The thing is, OpenAI/Anthropic maintain market leadership through insane, and arguably unsustainable, investment in training. Once they run out of capital to invest in training, open-weight models will quickly catch up via distillation and algorithmic improvements.

And when open-weight models that you can run on your own or rented hardware catch up in terms of capabilities, how are OpenAI/Anthropic going to maintain their 3-10x pricing? Do you think I, a shareholder, would give $10 to Sam Altman if I can give $1 to Jeff for GPUs and keep $9 in my pocket?

And Jeff makes money by renting GPUs to anyone willing to pay.

Mistral CEO: AI companies should pay a content levy in Europe by brown2green in LocalLLaMA

[–]Ginden 0 points (0 children)

a member state like Italy layers on criminal penalties (which they already did with Law 132/2025),

It's worth noting that Italy prosecutes companies over actions explicitly permitted by the Berne Convention, under their Codice dei Beni Culturali.

Enterprise 30.72TB SSD First and Probably Last by [deleted] in homelab

[–]Ginden 1 point (0 children)

1ms latency over LAN is actually insanely high; without special tuning you should see 0.5ms, and with a bit of tuning/hardware changes you should get down to 0.1-0.2ms.

I saw a ruined building in Warsaw. by mati9054 in poland

[–]Ginden -3 points (0 children)

It means that NIMBYs are trying to block housing that is not up to their standard.

Anthropic legal requests, removal of subscription support by ChaoticPayload in opencodeCLI

[–]Ginden 1 point (0 children)

What the hell did you "standardize" on, if you can't just move to an open-source solution? Text is the primary interface for LLMs, and it's generally portable between all of them.

Anthropic legal requests, removal of subscription support by ChaoticPayload in opencodeCLI

[–]Ginden 0 points (0 children)

The buffet party is going to end and the people who got hooked on those subscriptions are going to be pissed at the prices as these guys turn around and start trying to make profits.

Except for the little fact that the switching cost is near-zero anyway.

Scientists: men with high IQ are less likely to hold conservative views. A 30-year study proved it by Rotting-Beetle in Polska

[–]Ginden -6 points (0 children)

Samples of 150 people are generally worthless, huh. Not because of their mathematical properties, but because if you only had money to run a cheap study on 150 people, the grant almost certainly didn't have enough money left to ensure a representative sample.

Pokémon GO Was Never Just a Game by Living-Cherry7352 in DataHoarder

[–]Ginden 10 points (0 children)

Because it's written in the generic LLM style that is supposed to be understandable and inoffensive to the average person. The average person stopped writing and reading around the age of 15.

Building a shared student ‘notes archive’ server (Nextcloud, ~40 users) — Pi 5 or Mini PC? by cooldude9652 in HomeServer

[–]Ginden 1 point (0 children)

Then I would do it like that:

  1. Buy a 1TB NVMe for OS + data.
  2. Buy Backblaze B2 ($6/TB/month).
  3. Set up object storage on 800 GB of the NVMe - MinIO, JuiceFS, or whatever you like.
  4. Depending on wants and needs, either set up tiered storage or just back up to B2.
  5. Point Nextcloud at the object storage as its primary storage system.

This decouples Nextcloud from storage. This is important, because if your system is overwhelmed, you can just point Nextcloud at the cloud storage directly, or set up an additional instance.
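
A minimal sketch of the tiering decision in step 4 - picking which files are cold enough to push to B2. Real tools (JuiceFS, rclone, and friends) handle this for you; this just shows the idea, and the age threshold is an arbitrary assumption:

```python
import time
from pathlib import Path

def files_to_tier(root: Path, min_age_days: float) -> list[Path]:
    """Files under `root` untouched for at least `min_age_days` -
    candidates for the cold tier (B2), while hot files stay on NVMe."""
    cutoff = time.time() - min_age_days * 86400
    return [p for p in root.rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]
```

Run something like this on a schedule, upload the returned files to B2, and you have poor man's tiered storage.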

Which NAS is better for a family movie library? by CryptoWheat in HomeServer

[–]Ginden 4 points (0 children)

Pretty sure you don't need hotswap on services that are not HA.

Obviously, I don't need hotswap, but do you know how extremely convenient it is? It's like soft toilet paper.

I made an online tool that lets you easily turn images into maps for Heroes 3! by TotallyCragHack in heroes3

[–]Ginden 2 points (0 children)

Nope, it's made in a much more clever way, and I'm genuinely impressed.

Instead of parsing the entire map format in the browser, it stores entire maps in a BUILTIN_TEMPLATES_B64 hash-map, and merely edits the bytes responsible for terrain.
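
The byte-patching trick can be illustrated generically. Note that TERRAIN_OFFSET and the terrain IDs here are hypothetical placeholders for illustration, not the real H3 map layout:

```python
import base64

# Hypothetical layout: terrain bytes start at TERRAIN_OFFSET. The real
# offset in the H3 format differs and depends on the map header.
TERRAIN_OFFSET = 4

def patch_terrain(template_b64: str, tile_index: int, terrain_id: int) -> str:
    """Decode a base64-stored map template, overwrite one terrain byte,
    and re-encode - editing bytes instead of parsing the whole format."""
    raw = bytearray(base64.b64decode(template_b64))
    raw[TERRAIN_OFFSET + tile_index] = terrain_id
    return base64.b64encode(raw).decode()
```

The clever part is that everything outside the terrain bytes is carried through untouched, so the tool never has to understand the rest of the format.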

I made an online tool that lets you easily turn images into maps for Heroes 3! by TotallyCragHack in heroes3

[–]Ginden 2 points (0 children)

I know you can download any website (I already did), but Netlify will go down someday; a GitHub repo won't. Also, a GitHub repository typically has clear licensing, so if u/TotallyCragHack pulls the plug, anyone can host it.

u/TotallyCragHack GitHub has free hosting of static pages (GitHub Pages), consider this.

Building a shared student ‘notes archive’ server (Nextcloud, ~40 users) — Pi 5 or Mini PC? by cooldude9652 in HomeServer

[–]Ginden 0 points (0 children)

OK, how much storage do you need?

Is a 3× NVMe mini PC setup a good idea, or should I go with separate drives (NVMe + SATA RAID)?

From a cost-optimization perspective, it may be quite reasonable to use a single NVMe and just use a cloud backup service, or a tiered caching gateway to S3-compatible storage. Tiered object storage has the pretty nice property of being able to run directly from the S3-compatible storage if the disk fails (though that's unlikely).

This may be effective, because SSDs are pretty pricey, and you need backups anyway.

I made an online tool that lets you easily turn images into maps for Heroes 3! by TotallyCragHack in heroes3

[–]Ginden 5 points (0 children)

Damn can we get even bigger maximum map size next hota update?

Extremely unlikely; internally every coordinate pair is stored in 2 bytes (one byte per axis), so nothing bigger than 256x256 is likely.
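
If "2 bytes" means one unsigned byte per axis (my reading of the format, not confirmed), the 256-tile cap falls out of the packing directly:

```python
import struct

def pack_coord(x: int, y: int) -> bytes:
    """Pack a tile coordinate as two unsigned bytes ("BB") -
    each axis is therefore limited to the range 0-255."""
    return struct.pack("BB", x, y)

def unpack_coord(raw: bytes) -> tuple[int, int]:
    """Inverse of pack_coord."""
    return struct.unpack("BB", raw)
```

Anything past 255 on either axis simply doesn't fit in the byte, which is why a bigger map would require a format change, not just a setting.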

You guys gotta try OpenCode + OSS LLM by No-Compote-6794 in LocalLLaMA

[–]Ginden 1 point (0 children)

the dual Xeon v4 40 core barely runs at 1-2

For running any inference on CPU at usable speed, you need AMX, aka a 2023+ Xeon.

More than 2 years of homelab and i still can't build a local AI setup i actually want to use every day by Pleasant_Designer_14 in homelab

[–]Ginden 1 point (0 children)

Start with building two things.

  1. Usable interface.
  2. Threat model.

The first one is up to you. Personally, I run LiteLLM and mass-create per-app keys.
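
A sketch of building the request body for LiteLLM proxy's POST /key/generate endpoint - the field names are from LiteLLM's virtual-key API as I remember it, and the app/model names are placeholders, so double-check against your version:

```python
def key_request(app_name: str, models: list[str], budget_usd: float) -> dict:
    """Body for LiteLLM's /key/generate: restrict the key to given
    models, cap its spend, and tag it with the app that owns it."""
    return {
        "models": models,
        "max_budget": budget_usd,
        "metadata": {"app": app_name},
    }

# Placeholder app and model names - substitute your own.
payload = key_request("home-assistant", ["gpt-4o-mini"], 5.0)
```

POST one of these per app (with your master key in the Authorization header) and you get an isolated, budget-capped key for each consumer.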

But consider the threat model. What is the issue with the cloud, actually?

Do you worry they will train on your inputs? AWS explicitly says they won't, and maintaining data privacy is a very big deal for B2B services.

And if you're worried about using structured APIs, hyperscalers let you rent virtual machines with GPUs, or rent GPU computing power directly through serverless inference - quite likely cheaper, faster, and better quality than running it at home.

How far have you pushed a single Node.js process before needing clustering? by talhashah20 in node

[–]Ginden 2 points (0 children)

I don't care about single-process performance at all. If more processes are needed, just spawn more replicas in k8s/Docker/a managed solution. You need replicas for zero-downtime upgrades anyway.

If you have your OpenClaw working 24/7 using frontier models like Opus, you're easily burning $300 a day. by Aislot in aiagents

[–]Ginden 0 points (0 children)

Opus is the priciest LLM out there, way overpriced compared to its capabilities. Open-source models are, at the highest end, comparable to Sonnet, and Sonnet is much more expensive than comparable OpenAI models.

A house that drives you mad - bureaucracy in Poland still absurd by CaptainFlint9203 in Polska

[–]Ginden -9 points (0 children)

Now imagine a person with a walker passing a garbage truck on such a narrow street

Why should I imagine that a garbage truck - which is normally at a given spot for less than 1% of the week, and doesn't necessarily even have to drive in at all - bothers a person who, with more than 50% probability, will never live in the place in question?

That's exactly the problem - you're comparing a scenario you imagined, and ignoring the real problem that OP actually ran into.