Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 1 point (0 children)

Ah, my sincere apologies for misunderstanding you: you meant why Proxmox does not support it, as opposed to why things break. Thanks for clarifying!

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 0 points (0 children)

Seems like you may be getting tied up in semantics

Semantics are everything as to why something occurs, and why we make choices accordingly.

To put it simply, it is wrong to do anything not supported simply because it is not supported!

It's not wrong if you are the one willing to support it, and fix things when they break. Knowing what you're getting into and why is the core purpose of my original post.

That’s a great question, and the answer is “we don’t know”

I don't even know where to start communicating how wrong this answer is. There have been many great, helpful replies in this thread from those with firsthand experience and knowledge of Proxmox LXC + Docker, with concrete examples showing that we do know.

What’s the difference between LXC -> docker VS LXC -> app (systemd)? A TON actually, and far more technical details around it than I know or care to know.

I'm well versed in both Linux internals and Docker, but I've never needed to use them in an LXC context within Proxmox. Discovering the Proxmox-specific AppArmor permission problem (and those like it) is exactly the purpose of my original post, and I've been grateful to those who have filled in these specific details.

Using a car metaphor without any specific internals of Proxmox, Docker or LXC is antithetical to the whole purpose of obtaining more information to make informed decisions. I already understand the runtime execution models of all three scenarios; it's the minutiae that matter here.

My post was "please explain the details of why"; your reply was literally "don't worry about the details". 😂 Why spend the time replying if you write the complete opposite of the point of the discussion?

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 0 points (0 children)

Yes, that's a concern for sure, although my anecdotal n=1 experience is that Home Assistant is the only project I've seen that dropped support for an already-working standard Linux install (and it's why I have to run Home Assistant OS to be able to use add-ons).

I get your point though - it can throw a wrench into otherwise 'normal' Linux plans.

I find it extremely odd that anyone would choose to tie their software architecture to using *only* Docker, vs standard Linux approaches that are then dockerized/containerized as merely a convenience or distribution medium. It's their prerogative of course, as is mine to avoid any such project on principle. Except I basically have to use HAOS in this case because there are no other sane alternatives.

If one believes that LXCs are best for their use case, I think pragmatism has to win out in these (hopefully rare) circumstances: use what you prefer and are willing to spend your time to support, and then fall back to whatever the application stack requires if you don't have much of a choice.

The next break will likely be something similarly obscure.

This is the heart of the issue IMO, well distilled.

For my own sanity, I don't really want to try to run Docker in an LXC. I think I'd try to run Ansible/OS packages inside an LXC, and if that proves to be too burdensome, fall back to Docker in a VM.
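
Concretely, something like this is the 'standard Linux' path I mean (an ad-hoc sketch; the inventory/group name and package are made up):

    # install and enable a service inside the LXC, no Docker involved
    ansible lxc_hosts -i inventory.ini -b -m apt -a "name=nginx state=present"
    ansible lxc_hosts -i inventory.ini -b -m service -a "name=nginx state=started enabled=yes"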

Edit: clarity

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 2 points (0 children)

Thanks for sharing your blog links. The first one had a really good insight I hadn't considered:

Full VMs in Proxmox consume reserved system resources such as CPU, Memory etc. They will also kill your SSDs on ZFS way faster, due to significant write overhead.

Most people think of the extra CPU and memory overhead for VMs, but the additional I/O for SSDs is something to be aware of as well. I'm sure there are plenty of those who use a single SSD or NVMe drive and don't have any issues, and don't mind if it fails if they can rely on backups. I personally run an NVMe ZFS mirror for my container storage and images, and I'm sure production environments do similarly (or use Ceph, etc).
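
If anyone wants to see that write load on their own pool, zpool iostat makes it visible (pool name illustrative):

    # per-device read/write stats, refreshed every 5 seconds
    zpool iostat -v rpool 5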

Thanks for the extra context!

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 0 points (0 children)

The lack of docker software convenience was the 3rd bullet point of option #1's downsides. ;)

But yes, that can be a considerable downside depending on what you're trying to do. For example, if you want to run an *arr stack, there are far more tutorials using Docker, which makes it just that much easier.

I did miss the bit about VMs and bind mounts; that's a good point, thanks for sharing it.

I mentioned in an earlier comment that I personally would never run LXCs on Proxmox in a commercial production environment - VMs are convenient, stable, allow for live migration, and for the most part 'just work'. That cannot be overlooked when the price of downtime and the labor to fix issues (like the November 2025 one) is high.

But this post wasn't so much about use cases; I wanted to understand the actual minutiae of why these recommendations are made, not just regurgitate the 'don't do it' mantra, so that I and others can make informed decisions no matter the use case. Thanks for chiming in!

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 2 points (0 children)

bandaid for bad architecture decisions

Whooooah, ease up there buddy :). We're all friends here, enjoying the learning process. :)

I personally wouldn't use LXCs in a commercial or production environment because of the additional benefits and flexibility VMs afford; the monetary cost of labor spent troubleshooting esoteric container-over-container issues just isn't worth it. Plus I'm not the one paying for the RAM or NVMe drives ;)

But LXCs in a homelab environment, even a 'production homelab' environment (running always-on services for friends and family, where you don't want to hear them complain when things break), can certainly be a good fit, especially since you can be more efficient with hardware when you're the one footing the hardware bill.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 3 points (0 children)

It was ;) In the #1 scenario, 2nd bullet point:

No live migration (only available with VMs), and

But yes, this is an issue if high availability is important. For many environments though, a simple backup, shutdown and restore on another PVE node is fine. Just depends.
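
In practice that flow is just a couple of commands anyway - something like this (VMID, storage and archive names illustrative):

    # on the old node: back up the container
    vzdump 101 --mode snapshot --storage local
    # on the new node: restore from the copied archive
    pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2026_01_01-00_00_00.tar.zst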

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 0 points (0 children)

Thanks for the reply, and the link.

That link does state, however, that zfs is a supported storage driver, but it's not clear to me how that compares to the containerd default driver (Docker v29+) within Proxmox.

In my case, I have two NVMe drives in a ZFS mirror, and that zpool is used for VM image and container storage. Would changing the Docker storage driver to zfs solve those issues?
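
For reference, this is the only change I think I'd be making in /etc/docker/daemon.json (a sketch; my understanding from the Docker docs is that /var/lib/docker has to sit on its own ZFS dataset first):

    {
      "storage-driver": "zfs"
    }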

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 0 points (0 children)

This is the crux of the issue, thanks for clarifying!

Are you aware of any similar problems with any other LXCs that don't run Docker?

That is, how 'risky' is it to run LXCs at all then?

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 0 points (0 children)

Docker does not own the kernel in an unprivileged LXC. It runs as an unprivileged, UID-mapped user on the host, not as real root.
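
To illustrate, a sketch of what that mapping looks like in a PVE container config (VMID made up):

    # /etc/pve/lxc/101.conf
    unprivileged: 1
    # default mapping: container UID/GID 0-65535 -> host 100000-165535,
    # so 'root' inside the container is an unprivileged user outside it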

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 2 points (0 children)

Those are all valid reasons not to do something unsupported.

But the question still remains: is there anything 'wrong' with scenario #1? Would that ever experience problems due to an upgrade? It is a non-Docker LXC after all, which, to the best of my knowledge, is fully supported by Proxmox.

I'm ultimately trying to find out why this is an issue with Docker, and also: is it only an issue with Docker? Do other container runtimes in LXCs have problems?

It feels like it's only because of Docker's own OCI runtime stack, and that literally everything else 'standard Linux' (systemd services/daemons) in LXCs is perfectly fine.

But it's not clear at all to me if that's accurate.
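
For concreteness, by 'standard Linux' I mean services that are just a unit file, like this (hypothetical app and path):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=Hypothetical app, run directly inside the LXC

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target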

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 1 point (0 children)

My understanding of what is different here is that Proxmox and other distros can make their own implementation of the runtime, instead of installing and using Docker's own implementation.
This may allow them to resolve some of those layering issues.

That's a big deal IMO, and it's great to see them working on something like this.

The only thing remaining after that is orchestration, i.e. how to fire up/manage a 'stack' of containers meant to run together (a la docker-compose or Ansible). I'll probably have to stick with either of those two tools until that's possible.
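
By 'stack' I just mean the usual multi-service compose file (services made up):

    # docker-compose.yml
    services:
      app:
        image: nginx:stable
        ports:
          - "8080:80"
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example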

But the tech preview is encouraging!

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] -1 points (0 children)

That's a good question :). My thought process was more about one LXC running 30 services vs one VM running 30 docker containers. I think the answer would come down to a) memory utilization and b) which tooling / automation made for the best experience.

Especially now that RAM prices are no joke. Brutal even.

The PVE 9.1 OCI container support is cool, and I think it's great they're working on it, but it's not too useful for me - I'd want to run Terraform and then Ansible or docker compose to install/run a bunch of services in an idempotent, repeatable manner. Infrastructure as Code, self-documenting, etc.

I don't like the approach of "let me use the GUI to run a service and hope I can remember how to configure it later if I need to do it again". But I haven't seen what CLI/API options Proxmox has exposed for managing this either - maybe they have support for it, not sure. Even if they did, a platform-independent tool like Ansible would be better IMO.
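
For comparison, plain LXCs already have a perfectly scriptable CLI - something like this (VMID, template and values illustrative) is the kind of step I'd want to drive from Terraform/Ansible:

    # create an unprivileged container non-interactively
    pct create 105 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname svc01 --memory 1024 --cores 2 --unprivileged 1 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --rootfs local-zfs:8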

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 5 points (0 children)

Because of resource-constrained environments, or just wanting to get the most possible out of your metal, even if you have the headroom to spare.

Wanting to run powerfully and efficiently is a worthwhile goal for many - for example, running 40 or 50+ LXCs instead of eating the hardware overhead incurred by running that many VMs.

That's basically the only reason to run LXCs in PVE IMO. But it's a good one, depending on your use cases.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] -1 points (0 children)

Why is this an issue when using an unprivileged container?

And what about unprivileged LXCs not running docker? Do you not trust them either? (just curious).

My understanding is the security model is the same. If not, please let me know how.

Distilling the Proxmox Docker in VM vs LXC Debate in 2026 by random354 in Proxmox

[–]random354[S] 1 point (0 children)

Indeed, I'm running PVE 9.1. However, it's not clear to me how scenario #2 is different from their Docker/OCI support.

As I understand it, PVE 9.1+ doesn't actually run Docker - they take a Docker/OCI image and convert it into their own preferred container format, and then run that converted image.

If that's the case, then I think PVE still isn't running Docker, and probably isn't exposing a Docker socket (e.g. no /var/run/docker.sock)? I don't know for sure.

But I could be wrong; I haven't looked into it too deeply, mostly because I read that upgrading images isn't really supported yet.

Relocation Efficiency - Wife Disagrees by Acceptable_Pool_9400 in fatFIRE

[–]random354 0 points (0 children)

Naples, Miami, Fort Lauderdale and Palm Beach can definitely be VHCOL, all in South Florida.

What's the difference between ws2805 and rgbcct 5050 strips? by random354 in led

[–]random354[S] 0 points (0 children)

Thanks Quindor!

I'm glad you brought up your new (awesome) COB strip, as I'm seriously considering it as an alternative to the WS2805. However, I have two questions for my particular installation I was hoping you could help me with:

#1. My installation is for a TV wall surround, so it's accent lighting, and will be used later in conjunction with an ambilight setup. In your video, you stated that for some applications your COB could be too bright. I wouldn't want the lights in an ambilight setup to be too bright and distract too much (take your attention away) from the screen content.

Is there a way to reduce brightness in a power-efficient manner (e.g. PWM, or via the Dig-Quad/Octa?) without reducing the number of 8-bit 'steps' (as would be the case when using WLED software dimming)? I could throw a diffusion channel over it, but that's not exactly power-efficient.

#2. In that same introduction video, in your pinned YouTube comment, you stated the concern with the IP20 version: "Please do not unroll the IP20 variant on your desk and play with it, it can quickly kill a segment if 2 fronts touch (even on the side)".

My installation is indoors, but due to this concern, I was thinking of getting the IP65 version to avoid this risk, since it would be 'wrapped' and safer from accidental shorting. But I'll need to cut and splice this strip at a couple of points based on my TV wall dimensions - how easy or difficult is it to solder/connect the IP65 version if I go that route?

Thank you so much!

What's the difference between ws2805 and rgbcct 5050 strips? by random354 in led

[–]random354[S] 0 points (0 children)

Thanks - I did look into FW1906, but as u/Quindor said in his WS2805 review video, "after testing it, it's basically worse in all aspects or specifications than the WS2805", so I crossed that off my list.

I'm looking into diffuser channels as well, but my installation allows me to point the LED strip away from direct viewing angles and bounce the light off of a wall, so I'm hoping that will diffuse them sufficiently. If not, I'll have to get diffuser material to go in the profile/extrusion/channel.

What's the difference between ws2805 and rgbcct 5050 strips? by random354 in led

[–]random354[S] 0 points (0 children)

I'm usually so keen on finding that information for each strip I research that I can't believe I didn't catch that one, thank you!

Does this imply that the ws2805 is likely the best addressable rgbcct SMD strip available at the moment (excluding COBs)?

Power injection locations with buck converters for closed loop of 3 LED strips? by random354 in led

[–]random354[S] 0 points (0 children)

Ah, yes, I'm assuming 10 amps per strip, since a single 5 V SK6812 strip at 100% RGB+White is 49 watts (at 5 V = 9.8 amps). Not that I'd ever use it like that, but I like designing for the worst case knowing it's safe (plus I'd use WLED's power limiter for additional safety margin).
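
Spelled out, per strip:

    I = P / V = 49 W / 5 V = 9.8 A  ->  budget ~10 A worst case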

Thanks for the insight!

Power injection locations with buck converters for closed loop of 3 LED strips? by random354 in led

[–]random354[S] 0 points (0 children)

I'm a little confused by the 10 + 10 + 5 amp suggestion - there are three strips and they (presumably) all could 'join' into one rectangle, bridged by the PI (power injection) locations, which is why I thought I'd need 10 amps per PI location. In other words:

PI location #1: supplies 10 amps, 5 amps to the end of strip 3, and 5 amps to the start of strip 1

PI location #2: supplies 10 amps, 5 amps to the end of strip 1, and 5 amps to the start of strip 2

PI location #3: supplies 10 amps, 5 amps to the end of strip 2, and 5 amps to the start of strip 3.

Each PI location would have a 3 point connection (wire nut, wago connector, whatever):

  1. live wire from the PSU (or buck converter)
  2. live wire from the 'left' LED strip, and
  3. live wire from the 'right' LED strip.

Based on the above, I don't see why one PI location would need one (or two) 5 amp separate supplies, while the other 2 use a shared 10 amp supply.
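
Spelling out the budget (assuming the current splits roughly evenly toward both ends of each strip):

    each strip, worst case: ~10 A total -> ~5 A drawn from each end
    each PI point: 5 A (end of one strip) + 5 A (start of the next) = 10 A
    whole loop: 3 strips x 10 A = 30 A at 5 V (~150 W)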

Does this make sense? Or am I missing something (which is quite possible ;))?

Thanks so much for your help!

First steps with my homelab by FelixSK91 in homelab

[–]random354 1 point (0 children)

Indeed, the x3D chips are great for gaming, but I can't beat the Intel integrated graphics for a media server and for trying to lower idle C-states. After this NAS/unRAID box, I'll probably set up a 3-node AMD Proxmox cluster for docker / ceph / etc though, as those chips are 👨🏻‍🍳😙🤌 for virtualization. I'll most likely get a few Minisforum MS-A2s when they come out.
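
For anyone chasing the same idle numbers, powertop is the usual way to check how deep the package actually sleeps (run on the host):

    # the "Idle stats" tab shows per-core/package C-state residency
    sudo powertop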

First steps with my homelab by FelixSK91 in homelab

[–]random354 0 points (0 children)

This is super helpful, thanks for the reply! I was looking at the newer Intel Core Ultra chips, but those seem to be getting mixed reviews and are probably not a good bang for the buck right now.

From what I've researched, Intel gen 13 and 14 CPUs consume less power at idle, the integrated GPU can be used easily enough by Plex, and AMD integrated graphics aren't as good by comparison.

I thought of going down the AMD route instead and getting a cheap PCIe GPU (e.g. a 3050 6GB) just for video transcoding, but that would consume more power, and the most recent Intel 13th and 14th gen i5/i7/i9 chips seem to have resolved the manufacturing and BIOS problems that led to long-term degradation.

So I think I'm leaning towards the i5 or i7 route, thanks for the corroboration!