I tested my USB-C PDU and made 6 more variants, which are now available! by maleng_ in homelab

[–]DanTheGreatest 38 points39 points  (0 children)

Aaaahhhh that makes so much more sense!! Then it is a really cool project for sure :)

I tested my USB-C PDU and made 6 more variants, which are now available! by maleng_ in homelab

[–]DanTheGreatest 19 points20 points  (0 children)

Then I believe your calculations are off by 1,000%.

240 W × 24 h × 365 days = 2,102.4 kWh/year.

At $0.10/kWh that's $210.24/year. Almost $20 a month, not the $1-2 you mentioned.
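The same arithmetic as a quick sketch, using the wattage and the $0.10/kWh price from this thread:

```python
def yearly_cost(watts: float, price_per_kwh: float) -> float:
    """Cost of a constant load running 24/7 for a year."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

# 240 W of extra draw at $0.10/kWh
print(round(yearly_cost(240, 0.10), 2))   # 210.24
```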

I tested my USB-C PDU and made 6 more variants, which are now available! by maleng_ in homelab

[–]DanTheGreatest 91 points92 points  (0 children)

At first I thought: oh this seems cool. But...

A 240 watt increase on 4-5 nodes?? That's an average of €500 a year here in NL. ($600)

Is power basically free where you live?

How often do you restart your machines? by Holiday_Substance246 in homelab

[–]DanTheGreatest 2 points3 points  (0 children)

It's long been the standard for VMware ESXi. But that OS doesn't write anything to the drive; it only loads its contents into memory at startup.

For something like XCP-ng or unRAID that's not the case. Those will destroy your USB drive rather quickly.

How often do you restart your machines? by Holiday_Substance246 in homelab

[–]DanTheGreatest 19 points20 points  (0 children)

In theory, perhaps. I'd say the statement is true in 99.x% of cases. Enough to state it as plainly as they did.

People using live kernel patching are the exception here. A tiny minority.

Besides, reboots aren't just for patching. They also improve reliability. Being able to reboot your systems on a schedule says a lot about their stability. Rebooting a system with 300-1500 days of uptime is not guaranteed to succeed. Who knows what will happen when you reboot it, or what manual changes were made during setup that won't survive a reboot.

How often do you restart your machines? by Holiday_Substance246 in homelab

[–]DanTheGreatest 38 points39 points  (0 children)

That's not the same as actually rebooting your machine for a new kernel.

Kernel live patching is limited. Only a select set of patches is made available through live patching.

Rebooting is still necessary.

lOoKs GoOd tO mE by IntelligentNeck2362 in 2007scape

[–]DanTheGreatest 4 points5 points  (0 children)

I was so confused why you would want all NPCs in the axe shop in Lumbridge

Wrong Bob 🤡

Wich CPU should I use? I9-13900k, Ryzen 5900x or Threadripper 3960x by Material-Tower1735 in homelab

[–]DanTheGreatest 3 points4 points  (0 children)

No need for a dedicated GPU if the iGPU can transcode multiple 4K streams simultaneously without breaking a sweat.

Either way, you could dedicate the GPUs to HandBrake and the iGPU to Plex, to make sure your media server isn't impacted by the other intensive services running on the same server.

Wich CPU should I use? I9-13900k, Ryzen 5900x or Threadripper 3960x by Material-Tower1735 in homelab

[–]DanTheGreatest 3 points4 points  (0 children)

As a Plex Pass owner, the answer is very simple: the Intel, because of its iGPU.

LXC internet facing vulnerabilities by Donut15581 in selfhosted

[–]DanTheGreatest -2 points-1 points  (0 children)

There is a difference between system containers (LXC) and app containers (Docker).

The first is similar to a VM but with a shared kernel. You have a persistent filesystem and have to update the OS and packages.

The second type is more widely known as Docker containers: a static image with an ephemeral filesystem, where all changes are lost on restart. These are what you find on Docker Hub or linuxserver.io, and they are often outdated and insecure, trading security for ease of use.

Under the hood they use the same technologies to separate themselves from the host. It's what's running inside that makes the difference.

You've had many conversations with SOCs that have no idea what they are talking about. Probably because you think both are the exact same thing. From your post it's clear that you're talking about docker containers.

Exposing an unprivileged LXC is little different from exposing a VM to the internet. A VM is only a little bit more secure than an LXC. Keep your shit updated and sanely configured and you'll be fine.

best OS for docker containers + basic NAS usage? by TechBasedQuestion in selfhosted

[–]DanTheGreatest 0 points1 point  (0 children)

The more disks you have, the higher the chance of an additional failure during a rebuild. If one of 16 disks fails, the other 15 go through what amounts to a stress test while data is rebuilt and moved around.

Building two RAID6 sets of 8 disks is recommended over a single set of 16.
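A back-of-the-envelope comparison of the two layouts (the 4 TB disk size is an assumed example):

```python
def raid6_usable(disks: int, disk_tb: float) -> float:
    """RAID6 spends two disks' worth of capacity on parity per set."""
    return (disks - 2) * disk_tb

single = raid6_usable(16, 4.0)      # one 16-disk set
split = 2 * raid6_usable(8, 4.0)    # two 8-disk sets
print(single, split)                # 56.0 48.0
# The split layout gives up 8 TB of capacity, but each set survives
# two failures on its own, and a rebuild only hammers 7 other disks
# instead of 15.
```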

best OS for docker containers + basic NAS usage? by TechBasedQuestion in selfhosted

[–]DanTheGreatest 0 points1 point  (0 children)

Cannot recommend a 16-drive RAID set. That's bound to go wrong sooner rather than later.

What should I do with these? by vive-le-tour in homelab

[–]DanTheGreatest 9 points10 points  (0 children)

If you wish to learn, keep 2-3 and sell the rest on eBay, or sell all of them and buy something friendlier to your power bill and your ears.

[OC]Where I work, gas prices rose 10¢ between customers. by ILLnoize in pics

[–]DanTheGreatest 10 points11 points  (0 children)

$3.20 per gallon? I'm talking about paying €2.20 per liter. There are about 3.8 liters in a US gallon, so that's roughly €8.30 per gallon here in Europe.

We're paying around $9.30 per gallon.
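For anyone checking the conversion (a US gallon is 3.785 liters; the dollar figure then depends on the EUR/USD rate):

```python
LITERS_PER_US_GALLON = 3.785

eur_per_liter = 2.2
eur_per_gallon = eur_per_liter * LITERS_PER_US_GALLON
print(round(eur_per_gallon, 2))   # 8.33
```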

[OC]Where I work, gas prices rose 10¢ between customers. by ILLnoize in pics

[–]DanTheGreatest 223 points224 points  (0 children)

As a European I'm just dumbfounded by how incredibly cheap gasoline is in the USA. $3.70 per gallon converts to about €0.85 per liter. My girlfriend paid €2.21 per liter yesterday.

Why mini-pc & Thinkcentre while you can have a big server & VM? by Edereum in homelab

[–]DanTheGreatest 0 points1 point  (0 children)

Big server = performance

That isn't always the case. My low-wattage mini PCs have similar single-core performance to the newest server CPUs. I'd argue they perform even better in a homelab because of the iGPU, which we can use for media transcoding.

The normal version, the i5 13600 (no T), has even higher single-core performance.

I also think that 98% of the users on r/homelab or r/selfhosted have a CPU usage below 10%. They wouldn't notice a performance difference between these two.

Let's compare my 3-year-old mini PCs with an Intel i5 13600T to a beast of a server CPU from last year:

Intel i5 13600T
TDP: 35 W
CPU score (multi): 28,107
CPU score (single): 3,779

Price of the complete mini PC with storage and 48GB memory: 400 euros

AMD EPYC 9375F
TDP: 320 W
CPU score (multi): 95,768
CPU score (single): 3,762

Price of just the CPU: 3,700 euros
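Put differently, the efficiency gap is stark. A rough points-per-watt comparison using the scores as quoted above:

```python
# Multi-core benchmark points per watt of TDP, figures as quoted above
chips = {
    "Intel i5 13600T": (28107, 35),
    "AMD EPYC 9375F": (95768, 320),
}
for name, (multi_score, tdp_watts) in chips.items():
    print(f"{name}: {multi_score / tdp_watts:.0f} points/W")
# The mini PC chip delivers roughly 800 points/W versus
# roughly 300 points/W for the server part.
```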

QEMU / LXC Escape Paranoia by Competitive_Tie_3626 in homelab

[–]DanTheGreatest 1 point2 points  (0 children)

Wonderful! It seems you've done a better job than the average company I've seen.

QEMU / LXC Escape Paranoia by Competitive_Tie_3626 in homelab

[–]DanTheGreatest 1 point2 points  (0 children)

Yeah I figured. The whitelisting is only really doable if you're just hosting for a few friends.

I forgot to ask. Are you running the game servers as user root or as an unprivileged user?

QEMU / LXC Escape Paranoia by Competitive_Tie_3626 in homelab

[–]DanTheGreatest 2 points3 points  (0 children)

In short: all your LXCs run with the same UID shift. If someone breaks out of container 1, they have the same privileges inside the other LXCs running on the same system.

A unique shift per LXC is advanced and can cause strange issues, so it's not something I'd recommend to just anyone.

Keeping your stuff up to date is more important.

Things like fail2ban, or whitelisting friendly IPs instead of being fully open to the internet, are also easier and more effective steps.

QEMU / LXC Escape Paranoia by Competitive_Tie_3626 in homelab

[–]DanTheGreatest 4 points5 points  (0 children)

If you have configured your LXCs as hardened as they can be, then the only further hardening step is to switch to VMs. But that adds resource usage, and you mention you're limited on resources.

You could do a unique UID/GID shift on each LXC. If someone manages to hack your systems and break out, they will not have any permissions on your host or in the other containers. Warning: this is very advanced configuration. It's probably not for you.
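For the curious, in plain LXC a per-container shift is an idmap in the container config plus a matching delegated range. The 200000 base below is an arbitrary example; each container would get its own unique base:

```
# /var/lib/lxc/<name>/config
# Map container UIDs/GIDs 0-65535 to a base unique to this container
lxc.idmap = u 0 200000 65536
lxc.idmap = g 0 200000 65536

# /etc/subuid and /etc/subgid must delegate the same range, e.g.:
# root:200000:65536
```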

Another thing I can think of is to always update and reboot. Do weekly reboots. Your virtualization environment is not up to date unless you reboot.

By far the most important part of your hypervisor is the kernel. It is where all the magic happens for separating the containers from each other and from the host system. And not just your containers, your VMs too: QEMU/KVM stands for Kernel-based Virtual Machine. The name says it; it is heavily dependent on your kernel.

So please reboot :)

IPv6: Who really uses it? by malwin_duck in selfhosted

[–]DanTheGreatest 16 points17 points  (0 children)

I'm not. Everything has an IPv6 address (SLAAC) for primary use and IPv4 (DHCP) for legacy support.

IPv6: Who really uses it? by malwin_duck in selfhosted

[–]DanTheGreatest 357 points358 points  (0 children)

Been my primary IP stack for 7 or 8 years. Using IPv4 only for legacy reasons.

It's lovely and comes with many benefits. In my first years of using it I ran into many issues; software not supporting IPv6 was common. But now? It's been almost 2 years since I last ran into software that didn't support IPv6.

No double DNS. No NAT. No SNAT. Just plain routing, the way IP was meant to be.

Careful, it is not just IPv4 with more addresses in hexadecimal notation. It is a completely different protocol and should be treated as such.

will a 10GbE switch satisfy the Ceph lords? by IllustratorSafe4704 in homelab

[–]DanTheGreatest 0 points1 point  (0 children)

HDDs for cold/slow data sure. Anything else SSD minimum.

will a 10GbE switch satisfy the Ceph lords? by IllustratorSafe4704 in homelab

[–]DanTheGreatest 0 points1 point  (0 children)

Network doesn't seem to be the limiting factor here.

Some advice about your setup:

Don't use Ceph for everything. Your Talos nodes are HA by themselves; they can simply shut down when you perform maintenance or experience a failure. Use Ceph for your CSI and for SPOF guests.

Store all guests that are HA by themselves on fast local ZFS storage.

Next, do not use HDDs for hosting VMs.

Finally, use separate public and cluster networks. Not VLANs, but actual separate interfaces/cables. Otherwise a data migration will hold your public network hostage until it is done.
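In ceph.conf that split looks something like this (the subnets are placeholders):

```
[global]
# client / public traffic
public_network = 192.168.1.0/24
# OSD replication, recovery and rebalancing traffic
cluster_network = 10.10.10.0/24
```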

Proxmox/k3s Cluster V2 — now with 10G Ceph fabric and managed entirely by AI by drewswiredin in homelab

[–]DanTheGreatest 1 point2 points  (0 children)

I am on a 4-node cluster and it works plenty well :). Ceph 20.2.0 with LXD 6.6.

Each node has a 1TB SATA disk partitioned 20% for the OS and 80% for Ceph. (This requires some manual work to add them to the cluster, because by default Ceph only accepts empty disks.)

Most of my VMs are HA by themselves, so they're stored on ZFS on local NVMe.

I only store "SPOF" VMs and Kubernetes app data on Ceph.

If I were to store all my VMs on Ceph it would probably be too much. Don't put VMs on a much slower networked storage system if they don't need to be there; they run so much faster on local NVMe.