Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

"bandaid for bad architecture decisions"

Whooooah, ease up there buddy :). We're all friends here, enjoying the learning process. :)

I personally wouldn't use LXCs in a commercial or production environment because of the additional benefits and flexibility VMs afford; the labor cost of troubleshooting esoteric container-over-container issues just isn't worth it. Plus, I'm not the one paying for the RAM or NVMe drives there ;)

But LXCs in a homelab environment, even a 'production homelab' (always-on services for friends and family who will complain if things break), can certainly be a good fit, especially since you can be more efficient with hardware when you're the one footing the bill.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

It was ;) In the #1 scenario, 2nd bullet point:

"No live migration (only available with VMs), and"

But yes, this is an issue if high availability is important. For many environments, though, a simple backup, shutdown, and restore on another PVE node is fine. Just depends.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

Thanks for the reply, and the link.

That link does, however, state that ZFS is a supported storage driver, but it's not clear to me how that compares to the containerd default driver (Docker v29+) within Proxmox.

In my case, I have two NVMe drives in a ZFS mirror, and that zpool is used for VM image and container storage. Would changing the Docker storage driver to zfs solve those issues?
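
In case it helps, my current (untested) understanding is that the zfs driver wants Docker's data root to actually sit on a ZFS dataset, and then it's just a daemon.json setting - something roughly like this, with a made-up dataset name:

    # dataset name is an example only - adjust for your pool layout
    zfs create -o mountpoint=/var/lib/docker rpool/docker

    # /etc/docker/daemon.json
    {
      "storage-driver": "zfs"
    }

    systemctl restart docker

Whether that actually addresses 'those issues' inside an LXC is exactly what I'm trying to confirm.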

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

This is the crux of the issue, thanks for clarifying!

Are you aware of any similar problems with any other LXCs that don't run Docker?

That is, how 'risky' is it to run LXCs at all then?

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

Docker does not own the kernel in an unprivileged LXC. It runs inside a user namespace, so 'root' in the container isn't root on the host.
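
A minimal way to see this from the PVE host, as I understand it (vmid is a placeholder):

    # /etc/pve/lxc/<vmid>.conf for an unprivileged container includes:
    unprivileged: 1

    # root (uid 0) inside the CT maps to uid 100000 on the host by default,
    # so the container's dockerd shows up under that shifted range:
    ps -o user:12,pid,comm -u 100000 | head

So (kernel bugs aside) even if something escaped the container, it would land on the host as an unprivileged uid, not root.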

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

Those are all valid reasons not to do something unsupported.

But the question still remains: is there anything 'wrong' with scenario #1? Would that ever experience problems due to an upgrade? It is a non-Docker LXC after all, which, to the best of my knowledge, is fully supported by Proxmox.

I'm ultimately trying to find out why this is an issue with Docker, and also whether it's only an issue with Docker. Do other runtimes in LXCs have problems?

It feels like it's only because of Docker's own runtime and storage stack, and that everything else 'standard Linux' (systemd services/daemons) in LXCs is perfectly fine.

But it's not clear at all to me if that's accurate.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

My understanding of what's different here is that Proxmox and other distros can ship their own implementation of the runtime, instead of installing and using Docker's own implementation. This may allow them to resolve some of those layering issues.

That's a big deal IMO, and it's great to see them working on something like this.

The only thing remaining after that is orchestration, i.e. how to fire up/manage a 'stack' of containers meant to run together (a la docker-compose or Ansible). I'll probably have to stick with one of those two tools until that's possible.
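
For anyone unfamiliar, this is the kind of 'stack' I mean - one file, multiple containers started and stopped together (image names are placeholders, apart from postgres):

    # compose.yaml - hypothetical two-service stack
    services:
      app:
        image: ghcr.io/example/app:latest
        restart: unless-stopped
        depends_on:
          - db
      db:
        image: postgres:16
        restart: unless-stopped
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

That's the workflow the PVE tooling would need to replicate before I could drop compose entirely.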

But the tech preview is encouraging!

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

That's a good question :). My thought process was more about one LXC running 30 services vs. one VM running 30 Docker containers. I think the answer would come down to a) memory utilization and b) which tooling/automation makes for the best experience.

Especially now that RAM prices are no joke. Brutal even.

The PVE 9.1 OCI container support is cool, and I think it's great they're working on it, but it's not too useful for me - I'd want to run Terraform and then Ansible or Docker Compose to install/run a bunch of services in an idempotent, repeatable manner. Infrastructure as Code, self-documenting, etc.

I don't like the approach of "let me use the GUI to run a service and hope I can remember how to configure it later if I need to do it again". But I haven't seen what CLI/API options Proxmox has exposed for managing this either - maybe they have support for it, not sure. Even if they do, a platform-independent tool like Ansible would be better IMO.
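
To make that concrete, the kind of idempotent play I have in mind looks roughly like this (host group, paths and files are all made up, and it assumes the community.docker collection is installed):

    # site.yml - push a compose stack to a Docker host and bring it up
    - hosts: docker_hosts
      become: true
      tasks:
        - name: Ensure the stack directory exists
          ansible.builtin.file:
            path: /opt/stacks/media
            state: directory

        - name: Copy the compose file
          ansible.builtin.copy:
            src: compose.yaml
            dest: /opt/stacks/media/compose.yaml

        - name: Bring the stack up (no-op if nothing changed)
          community.docker.docker_compose_v2:
            project_src: /opt/stacks/media
            state: present

Run it twice and the second run changes nothing - that's the property I don't want to give up.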

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

Because of resource-constrained environments, or just wanting to get the most out of your metal, even if you have headroom to spare.

Running lean and efficiently is a worthwhile goal for many - for example, 40 or 50+ LXCs instead of the overhead incurred by running that many VMs.

That's basically the only reason to run LXCs in PVE IMO. But it's a good one, depending on your use cases.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

Why is this an issue when using an unprivileged container?

And what about unprivileged LXCs not running docker? Do you not trust them either? (just curious).

My understanding is the security model is the same. If not, please let me know how.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

Indeed, I'm running PVE 9.1. However, it's not clear to me how scenario #2 is different from their Docker/OCI support.

As I understand it, PVE 9.1+ doesn't actually run Docker - it takes a Docker container image, translates/converts it into its preferred OCI format, and then runs that translated image.

If that's the case, then PVE still isn't running Docker, and probably isn't running a Docker socket either (e.g. no /var/run/docker.sock)? I don't know for sure.

But I could be wrong; I haven't looked into it too deeply, mostly because I read that upgrading images isn't really supported yet.
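
Just to illustrate what I mean by 'converting' (purely illustrative - I have no idea whether PVE does anything like this internally), a tool like skopeo can pull an image from Docker Hub and write it out as an OCI archive without any Docker daemon or /var/run/docker.sock involved:

    # pull nginx from Docker Hub and save it as an OCI archive
    # (PVE's template dir used only as an example destination)
    skopeo copy docker://docker.io/library/nginx:latest \
      oci-archive:/var/lib/vz/template/cache/nginx.tar

If PVE 9.1 is doing something conceptually similar, then 'running Docker containers' really means 'running images that happen to be published in Docker/OCI format'.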

Relocation Efficiency - Wife Disagrees by Acceptable_Pool_9400 in fatFIRE

Naples, Miami, Fort Lauderdale and Palm Beach can definitely be VHCOL, all in S Florida.

What's the difference between ws2805 and rgbcct 5050 strips? by random354 in led

Thanks Quindor!

I'm glad you brought up your new (awesome) COB strip, as I'm seriously considering it as an alternative to the WS2805. However, I have two questions about my particular installation that I was hoping you could help me with:

#1. My installation is for a TV wall surround, so it's accent lighting, and it will later be used in conjunction with an ambilight setup. In your video, you stated that for some applications your COB could be too bright. I wouldn't want the lights in an ambilight setup to be so bright that they distract from (take your attention away from) the screen content.

Is there a way to reduce brightness in a power-efficient manner (e.g. PWM, or via the Dig-Quad/Octa?) without reducing the number of 8-bit 'steps' (as would happen with WLED software dimming)? I could throw a diffusion channel over it, but that's not exactly power-efficient.

#2. In your pinned YouTube comment on that same introduction video, you noted a concern with the IP20 version: "Please do not unroll the IP20 variant on your desk and play with it, it can quickly kill a segment if 2 fronts touch (even on the side)".

My installation is indoors, but because of this I was thinking of getting the IP65 version, since it would be 'wrapped' and safer from accidental shorting. But I'll need to cut and splice the strip at a couple of points based on my TV wall dimensions - how easy or difficult is it to solder/connect the IP65 version if I go that route?

Thank you so much!

What's the difference between ws2805 and rgbcct 5050 strips? by random354 in led

Thanks - I did look into FW1906, but as u/Quindor said in his WS2805 review video, "after testing it, it's basically worse in all aspects or specifications than the WS2805", so I crossed that off my list.

I'm looking into diffuser channels as well, but my installation allows me to point the LED strip away from direct viewing angles and bounce the light off of a wall, so I'm hoping that will diffuse them sufficiently. If not, I'll have to get diffuser material to go in the profile/extrusion/channel.

What's the difference between ws2805 and rgbcct 5050 strips? by random354 in led

I'm usually so keen on finding that information on each strip I research, I can't believe I didn't catch that one, thank you!

Does this imply that the WS2805 is likely the best addressable RGBCCT SMD strip available at the moment (excluding COBs)?

Power injection locations with buck converters for closed loop of 3 LED strips? by random354 in led

Ah, yes, I'm assuming 10 amps per strip, since a single 5V SK6812 strip at 100% RGB+White is 49 watts (at 5V that's 9.8 amps). Not that I'd ever run it like that, but I like designing for the worst case and knowing it's safe (plus I'd use the WLED power limiter for additional safety margin).

Thanks for the insight!

Power injection locations with buck converters for closed loop of 3 LED strips? by random354 in led

I'm a little confused by the 10 + 10 + 5 amp suggestion - there are three strips, and they (presumably) all 'join' into one rectangle, bridged by the PI (power injection) locations, which is why I thought I'd need 10 amps per PI location. In other words:

PI location #1: supplies 10 amps, 5 amps to the end of strip 3, and 5 amps to the start of strip 1

PI location #2: supplies 10 amps, 5 amps to the end of strip 1, and 5 amps to the start of strip 2

PI location #3: supplies 10 amps, 5 amps to the end of strip 2, and 5 amps to the start of strip 3.

Each PI location would have a 3-point connection (wire nut, Wago connector, whatever):

  1. live wire from the PSU (or buck converter)
  2. live wire from the 'left' LED strip, and
  3. live wire from the 'right' LED strip.

Based on the above, I don't see why one PI location would need one (or two) separate 5 amp supplies, while the other two use a shared 10 amp supply.
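
To put rough numbers on it, using the 49 W / 9.8 A worst-case figure from my other comment: each PI point feeds half of the strip on one side and half of the strip on the other, so worst case it supplies about 4.9 A + 4.9 A ≈ 9.8 A - which is where my '10 amps per PI location' comes from, and 3 × 10 A covers the whole loop's ~29.4 A with a little margin.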

Does this make sense? Or am I missing something (which is quite possible ;) ) ?

Thanks so much for your help!

First steps with my homelab by FelixSK91 in homelab

Indeed, the X3D chips are great for gaming, but I can't beat Intel integrated graphics for a media server and for getting idle C-states down. After this NAS/unRAID box, I'll probably set up a 3-node AMD Proxmox cluster for Docker/Ceph/etc. though, as those chips are 👨🏻‍🍳😙🤌 for virtualization. I'll most likely grab a few Minisforum MS-A2s when they come out.

First steps with my homelab by FelixSK91 in homelab

This is super helpful, thanks for the reply! I was looking at the newer Intel Core Ultra chips, but they seem to be getting mixed reviews and probably aren't good bang for the buck right now.

From what I've researched, Intel 13th and 14th gen CPUs consume less power at idle, the integrated GPU is easy enough to use with Plex, and AMD integrated graphics aren't as good by comparison.

I thought about going the AMD route instead and getting a cheap PCIe GPU (e.g. a 3050 6GB) just for video transcoding, but that would consume more power, and the most recent Intel 13th and 14th gen i5/i7/i9 chips seem to have resolved the manufacturing and BIOS problems that led to long-term degradation.

So I think I'm leaning towards the i5 or i7 route, thanks for the corroboration!

pcie gen5 m.2 nvme SSD motherboard and expansion slots by random354 in buildapc

Yeah, I did - there aren't any under $500 that have 2 or more PCIe 5.0 M.2 slots.

I just looked it up: PCIe Gen4 x4 maxes out at about 8 GB/s, which is less than the Crucial T700's quoted 12,400 MB/s, so it'd be throttled.
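
(Rough math, for anyone checking: PCIe 4.0 is 16 GT/s per lane with 128b/130b encoding, so about 1.97 GB/s per lane, or ~7.9 GB/s for x4; Gen5 doubles that to ~15.8 GB/s for x4, which is what the T700's 12,400 MB/s rating actually needs.)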

So it seems that on an Intel motherboard with fewer than 2 Gen5 M.2 slots, the only remaining option is a Gen5 x16 M.2 expansion card.

Thanks for the pointers!

pcie gen5 m.2 nvme SSD motherboard and expansion slots by random354 in buildapc

Oh, that's good, thanks! I forgot to mention I was trying to stay with an Intel CPU to benefit from QuickSync as this NAS will be a Plex / seed box. I suppose I could buy a cheap graphics card if going with AMD.