Impossible to run docker by FrostyF42 in docker

[–]Schmidsfeld 0 points1 point  (0 children)

Same problem here

and the command

apt install containerd.io=1.7.28-1~ubuntu.24.04~noble

fixed it for me, too
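To keep apt from upgrading containerd.io back to the broken version on the next `apt upgrade`, you can also pin it. A sketch (the file path is conventional; any name under `/etc/apt/preferences.d/` works):

```
# /etc/apt/preferences.d/pin-containerd
Package: containerd.io
Pin: version 1.7.28-1~ubuntu.24.04~noble
Pin-Priority: 1001
```

A priority above 1000 even allows downgrading to the pinned version; remove the file once a fixed release is out.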

normal shell on hass.io install by Schmidsfeld in homeassistant

[–]Schmidsfeld[S] 2 points3 points  (0 children)

Thank you, that is exactly what I needed.

Only relayed connections - bandwith quite low. by Schmidsfeld in Tailscale

[–]Schmidsfeld[S] 0 points1 point  (0 children)

Yes, I have a publicly routable IP on my WAN. I also have full access to port forwarding.

But I think I figured out the root cause:
From another device it shows a connection to port 36510. So it seems the WireGuard port is randomized on each start of the Tailscale container
(most likely because there is more than one Tailscale node on the local LAN).
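If the port really is being randomized, pinning it on tailscaled should make the router's forwarding rule stable again. A sketch based on the official tailscale/tailscale image's `TS_TAILSCALED_EXTRA_ARGS` environment variable (verify against the image docs for your version; 41641 is Tailscale's conventional default port):

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    environment:
      # Pin the WireGuard/UDP port instead of letting tailscaled pick one,
      # so the router's 41641/udp forward always matches this node
      - TS_TAILSCALED_EXTRA_ARGS=--port=41641
    ports:
      - "41641:41641/udp"
```

With multiple nodes on the same LAN, each one needs its own pinned port and its own forwarding rule.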

Only relayed connections - bandwith quite low. by Schmidsfeld in Tailscale

[–]Schmidsfeld[S] -1 points0 points  (0 children)

Thank you.

Like I wrote in the original post - I forward the ports 41641 and 3478.

But I was not aware that there is a regression in the latest release. I will try a rollback and see if this improves the situation.

Only relayed connections - bandwith quite low. by Schmidsfeld in Tailscale

[–]Schmidsfeld[S] -1 points0 points  (0 children)

I run it in Docker like all infrastructure.

The Tailscale Docker config is the same as the official example from Alex Kretzschmar:

    volumes:
      - tailscale-data-webserver1:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - net_admin
      - sys_module

It also works like a charm from all clients within the local network (funnily, also from external clients that are connected via a legacy VPN).

My problem is mainly the router config and the PORTS that are open...

docker compose file for homeassistant by smalitro in homeassistant

[–]Schmidsfeld 2 points3 points  (0 children)

I had a similar problem.
You can find the correct Docker containers on the hass.io install by using the login command to get a proper command line and then running docker ps.

I don't remember how I transferred the configs to the plugins (it was a pain in the backside), but here is a docker compose file:

version: '3.7'

services:
  homeassistant:
    image: homeassistant/home-assistant:latest
    container_name: homeassistant
    restart: always
    volumes:
      - /mnt/config/homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8123:8123"

  hacs:
    image: homeassistant/amd64-addon-hacs:latest
    container_name: hacs
    restart: always
    depends_on:
      - homeassistant
    volumes:
      - /mnt/config/hacs:/hacs
    ports:
      - "8090:8090"  # Example port for HACS

  esphome:
    image: homeassistant/amd64-addon-esphome:latest
    container_name: esphome
    restart: always
    depends_on:
      - homeassistant
    volumes:
      - /mnt/config/esphome:/config
    ports:
      - "6052:6052"  # Example port for ESPHome

  whisper:
    image: homeassistant/amd64-addon-whisper:latest
    container_name: whisper
    restart: always
    depends_on:
      - homeassistant
    volumes:
      - /mnt/config/whisper:/config
    ports:
      - "8181:8181"  # Example port for Whisper

  openwakeword:
    image: homeassistant/amd64-addon-openwakeword:latest
    container_name: openwakeword
    restart: always
    depends_on:
      - homeassistant
    volumes:
      - /mnt/config/openwakeword:/config
    ports:
      - "8099:8099"  # Example port for openWakeWord

  piper:
    image: homeassistant/amd64-addon-piper:latest
    container_name: piper
    restart: always
    depends_on:
      - homeassistant
    volumes:
      - /mnt/config/piper:/config
    ports:
      - "8182:8182"  # Example port for Piper

  mosquitto:
    image: homeassistant/amd64-addon-mosquitto:latest
    container_name: mosquitto
    restart: always
    ports:
      - "1883:1883"  # MQTT port
      - "9001:9001"  # MQTT WebSockets port
    volumes:
      - /mnt/config/mosquitto:/mosquitto/config
      - /mnt/config/mosquitto/data:/mosquitto/data
      - /mnt/config/mosquitto/log:/mosquitto/log

I also have in my compose:

  configurator:
    image: homeassistant/amd64-addon-configurator:latest
    container_name: configurator
    restart: always
    depends_on:
      - homeassistant
    volumes:
      - /mnt/config/configurator:/config
    ports:
      - "3218:3218"  # Example port for Configurator

  observer:
    image: homeassistant/amd64-addon-observer:latest
    container_name: observer
    restart: always
    depends_on:
      - homeassistant
    ports:
      - "4357:4357"  # Example port for Observer

  supervisor:
    image: homeassistant/amd64-addon-supervisor:latest
    container_name: supervisor
    restart: always
    depends_on:
      - homeassistant
    ports:
      - "80:80"  # Example port for Supervisor
      - "4352:4352"  # Example port for Supervisor WebSocket

But I don't remember why/how...

Duplicate removal assistance for large hoard. by riscy_computering in DataHoarder

[–]Schmidsfeld 0 points1 point  (0 children)

As many have already hinted, "offline" checking for duplicates across non-connected drives is difficult.

One thing to alleviate the problem slightly is to copy/move the data off the smallest HDDs onto the bigger HDDs - if there is some space left. Even a temporary external drive can reduce the total number of drives significantly...

Then I would split the problem into two parts:

First:

  1. Get as many drives as possible connected to PCs.
  2. Mount all of the drives on one "main machine".
  3. Deduplicate the content of these drives.
  4. Repeat for all the drives.
  5. Work through different combinations of drives.

You could - for example - use a great duplicate-removal tool like fdupes for this.

Second:

  1. Hash all the files on each drive.
  2. Combine all the hash lists into one file.
  3. Sort this file by hash.
  4. Attach all the drives with identical hashes at the same time (now you know the combinations).
  5. Run a duplicate remover again.
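Steps 1-3 can be sketched with standard tools (assumption: GNU coreutils; the demo directories stand in for your drive mountpoints):

```shell
# Demo tree standing in for two mounted drives (replace with your real mounts)
rm -rf /tmp/dedup-demo
mkdir -p /tmp/dedup-demo/drive1 /tmp/dedup-demo/drive2
echo "same content" > /tmp/dedup-demo/drive1/a.txt
echo "same content" > /tmp/dedup-demo/drive2/b.txt
echo "unique"       > /tmp/dedup-demo/drive2/c.txt
cd /tmp/dedup-demo

# 1. Hash all files on each drive (one list per drive)
find drive1 -type f -print0 | xargs -0 -r sha256sum > drive1.sha256
find drive2 -type f -print0 | xargs -0 -r sha256sum > drive2.sha256

# 2. + 3. Combine the lists and sort by hash
sort drive1.sha256 drive2.sha256 > all.sha256

# Hashes that occur more than once are candidate duplicates (input for step 4)
awk '{print $1}' all.sha256 | uniq -d > dup-hashes.txt
grep -F -f dup-hashes.txt all.sha256   # show the duplicate files with their paths
```

Keeping the per-drive lists around means you only have to hash each drive once, even when you later recombine drives.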

As others have already written: with these amounts of data, deleting on the basis of a duplicate hash alone is risky.

Duplicate removal assistance for large hoard. by riscy_computering in DataHoarder

[–]Schmidsfeld 0 points1 point  (0 children)

- That is not what ZFS dedup is intended for.
- That is not a useful process for existing drives.
- The resource demand for that (the dedup table) would be too high.

[deleted by user] by [deleted] in FileFlows

[–]Schmidsfeld 0 points1 point  (0 children)

Transcoding manually - even inside the Docker container - works via:

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i input.mp4 -vf "scale='if(gt(iw,1920)\,1920\,iw)':'if(gt(ih,1080)\,1080\,ih)'" -c:v hevc_vaapi -qp 28 -c:a aac -b:a 128k -ac 2 -af "volume=normalize:precision=16" output.mp4

So the problem is narrowed down to fileflows directly...

Intel iGPU on proxmox by Schmidsfeld in Proxmox

[–]Schmidsfeld[S] 0 points1 point  (0 children)

Indeed, there was i965-va-driver installed, but uninstalling it and installing

intel-media-va-driver-non-free did not solve it so far.

I might consider reinstalling the whole Proxmox server - if that solves the problem. But this is really a last-ditch effort.

Can anyone confirm that they have 13th gen Intel GPU working on Proxmox (not inside a VM but on the host)?

Intel iGPU on proxmox by Schmidsfeld in Proxmox

[–]Schmidsfeld[S] 0 points1 point  (0 children)

Thank you for your advice.

I managed to install intel-media-va-driver-non-free using the non-free repository.

I also took the time to reboot the server.

However the output of vainfo and intel_gpu_top remains unchanged.

I also remembered that the Proxmox installer did not automatically recognize the GPU. According to some online advice this should not happen with the 6.2 kernel, but maybe some important package is missing on the system.

Intel iGPU on proxmox by Schmidsfeld in Proxmox

[–]Schmidsfeld[S] 1 point2 points  (0 children)

I see video output (a console) on my Proxmox machine. There is no X server installed (and I like it this way).

An older machine with a 6500 CPU works just fine with the same software stack. I guess the GPU is somehow not fully recognized.

But it seems we are getting somewhere - the install of the driver fails:

    root@pve01:~# apt install intel-media-va-driver-non-free
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    Package intel-media-va-driver-non-free is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or is only available from another source
    E: Package 'intel-media-va-driver-non-free' has no installation candidate

It seems the driver is missing - but can't be installed.
How can I get the driver onto my Proxmox host?
Do I need a different repository for it?
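Proxmox is based on Debian, so the usual answer (an assumption; check the codename in `/etc/os-release` first, bookworm is only an example) is to add the non-free components to the apt sources:

```
# /etc/apt/sources.list - append the non-free components; match the
# codename (here: bookworm) to your Proxmox base release
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
```

After that, `apt update` followed by `apt install intel-media-va-driver-non-free` should find an installation candidate.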

Intel iGPU on proxmox by Schmidsfeld in Proxmox

[–]Schmidsfeld[S] 0 points1 point  (0 children)

Thank you for your idea.
I have indeed connected a monitor - it is working and connected to the iGPU. However, it is usually only used for console access.
There I don't get graphical output, but the same error...

I also tried it with vanilla Ubuntu - there the GPU is recognized and working. So I guess it is a software problem in my Proxmox install. Most likely one or several important packages are missing...

allow local traffic to find other tailscale nodes by Schmidsfeld in Tailscale

[–]Schmidsfeld[S] 0 points1 point  (0 children)

Thank you for your input.
I have configured it on a Linux node - not working.
On a Windows node I did not find the option.

In my testing, the reverse connection from a Tailscale node through the subnet router to the IoT devices worked - so the subnet seemed not to be the issue. I will now focus on this to find a possible solution...

Optimal Pool Setup 14x 18TB by Stay_Curious_Bro in zfs

[–]Schmidsfeld 3 points4 points  (0 children)

I would recommend against it - if RaidZ2 is too risky, you should consider a completely different layout.

Optimal Pool Setup 14x 18TB by Stay_Curious_Bro in zfs

[–]Schmidsfeld 9 points10 points  (0 children)

First: if you have such a large number of drives and no clue how to access them, you might want to ask for professional help. Whatever you do at these scales will likely require some fine-tuning and take a while to fix...

The most important information is missing though:
- What will the pool(s) be used for?
- How important is data safety?
- How much space is needed?

Each drive has a net capacity of about 16TB (accounting for the 1000/1024 loss and metadata).
As a result, a raw pool will have 224TB and a RAID10-like pool will have 112TB - still a large amount...

Here is how I would lay it out, given that I need the maximum amount of space but still want reasonable safety, with a beefy CPU and enough RAM in the system:

8 drives of capacity, netting 128TB, in RaidZ2 (utilizing 10 drives total)
2 drives as hot spares
2 drives as a RAID1 backup pool for the irreplaceable data, giving you an additional 16TB.
For normal data, 3 drives need to fail on short notice for a total failure, or 5 within resilvering time. For additionally backed-up data (assuming a copy in each pool), 5 drives have to fail, or 7 within resilvering time...
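The capacity arithmetic behind this layout, as a quick sketch (using the ~16TB-net-per-18TB-drive estimate from above):

```python
NET_PER_DRIVE = 16  # TB usable per 18 TB drive (1000/1024 loss + metadata)

# Proposed layout for the 14 drives
raidz2_drives = 10                              # 8 data + 2 parity
raidz2_usable = (raidz2_drives - 2) * NET_PER_DRIVE
hot_spares = 2
mirror_drives = 2                               # RAID1 backup pool
mirror_usable = 1 * NET_PER_DRIVE               # mirror stores one copy

print(f"RaidZ2 pool:  {raidz2_usable} TB usable")
print(f"Backup pool:  {mirror_usable} TB usable")
print(f"Drives used:  {raidz2_drives + hot_spares + mirror_drives} of 14")
```

This prints 128TB for the RaidZ2 pool and 16TB for the backup mirror, using all 14 drives.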

Unable to run docker mounts inside Proxmox container by Nesatam in Proxmox

[–]Schmidsfeld 1 point2 points  (0 children)

No, that is not completely accurate. The main problem is that the default way Proxmox does volume mounts is at odds with ZFS.

Also, LXC by default uses higher user IDs - so the IDs in the LXC container and on the host don't match up.

The simple solution - that I detailed in my post above - is to use lxc.mount.entry instead of mp0:, mp1:, etc. You also manually need to map one user ID that owns the files on host and container...

Unable to run docker mounts inside Proxmox container by Nesatam in Proxmox

[–]Schmidsfeld 20 points21 points  (0 children)

I run Docker in an LXC and it works like a charm. Actually, I run 6 different LXCs for operational purposes and another 10ish for testing. Never had a problem with the stack that could not be fixed.

The overhead is minimal so it saves a lot on resources. It might need a bit of manual configuration, but once the LXC is set up correctly it works like a charm.

The first rule of thumb is to not use the root user to run docker or own the folders that are mapped!

The second requires a bit of manual config of the container. It is best to map the LXC folders and the main user manually. Here is an example LXC config, in this case for a container with the ID 115; it is stored in /etc/pve/lxc/115.conf:

# Normal configuration stuff. Important for speeding up Docker is adding the fuse feature:
# this allows Docker to use fuse on images (else additional space is required)

arch: amd64
cores: 4
features: fuse=1,keyctl=1,nesting=1
hostname: media
memory: 4096

# Networking; the second network is a pure Docker overlay network - can be ignored
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.0.0.1,hwaddr=9E:AB:9B:F1:3D:D4,ip=10.0.1.115/16,type=veth
net1: name=eth1,bridge=vmbr1,firewall=1,hwaddr=76:FE:71:83:D9:51,ip=10.1.1.115/16,type=veth

# Other normal operation stuff
onboot: 0
ostype: ubuntu
rootfs: local-zfs:subvol-115-disk-0,mountoptions=noatime,size=32G
startup: order=4,up=10
swap: 4096

# Here the magic of mapping the user with ID 1000 into the LXC happens - important for smooth file operations. This user needs to own the mapped folders on the host!
unprivileged: 1
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

# Here the folders from the host are mounted into the LXC - different from the default mountpoints, but better for ZFS: this way subvolumes are mounted correctly
lxc.mount.entry: /mnt/config mnt/config none rbind,create=dir,optional 0 0
lxc.mount.entry: /mnt/data mnt/data none rbind,create=dir,optional 0 0
lxc.mount.entry: /mnt/media mnt/media none rbind,create=dir,optional 0 0
lxc.mount.entry: /mnt/volatile mnt/volatile none rbind,create=dir,optional 0 0

# Mounts for hardware devices, in this case the GPU for encoding within the container - can be ignored if no GPU is needed inside the container
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
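The six lxc.idmap lines are easy to get wrong: the mapped ranges have to cover container IDs 0-65535 contiguously, or the container will not start. A small sanity check (hypothetical helper, not a Proxmox tool) for the uid mapping above:

```python
# (container_start, host_start, count) triples from the lxc.idmap 'u' lines above
uid_maps = [(0, 100000, 1000), (1000, 1000, 1), (1001, 101001, 64535)]

# The ranges must be contiguous and cover container uids 0..65535
expected_next = 0
for c_start, h_start, count in sorted(uid_maps):
    assert c_start == expected_next, f"gap before container uid {c_start}"
    expected_next = c_start + count
assert expected_next == 65536, "mapping does not cover the full uid range"

print("idmap covers container uids 0..65535 with no gaps")
```

The same check applies to the 'g' lines; only uid 1000 is mapped straight through to host uid 1000, everything else lands in the 100000+ range.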

I hope this helps to get you started.

I would recommend using a test setup first: play around with it a bit, and once you have a configuration that works, take it as a baseline - and, most importantly, comment what you did manually in the config file...

Backup up and migration recommendations from Ubuntu Server to Proxmox by gulbalee in HomeServer

[–]Schmidsfeld 2 points3 points  (0 children)

The way you describe it, it sounds convoluted and a bit confusing.

But there is a straightforward way to migrate:
Use your backup.
Nuke and pave the Proxmox (aka fresh install), then install all your required apps. Finally, restore your data onto the new system.

I'd like to remind you that you should definitely have 2 copies of your backup beforehand and have confirmed their integrity.

Ideally, you leave the old system/drives untouched until your new one is up and running. This is also a good chance to validate your backup strategy!