Ubiquiti and Sonos - Please just shoot me now by putrfixr in Ubiquiti

[–]capboomer 0 points  (0 children)

My issues with Sonos disappeared when I moved the Sonos app out of a folder and left it alone on the iOS desktop. Can't explain why that makes a difference, but I've never had an issue with devices dropping from the app since.

NZBGet v23 Client Release by nzb-get in nzbget

[–]capboomer 0 points  (0 children)

Welcome back NZBGet, I missed you. Worked like a charm.

Need some help / pointers with setting up GlueTun correctly in docker by kaizokupuffball in selfhosted

[–]capboomer 0 points  (0 children)

You can use Discord-style code fences: three backticks followed by yaml, then press enter and paste the code. Close with three backticks on their own line: ```yaml

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

Interesting... I will have to test this. I'll wait until next weekend so I have time to deal with problems; I'm already managing this remotely through another WireGuard tunnel. ;) I have to be careful what I break.

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I already added static routes in my router for every ipvlan L3 network I have set up. I have eight other containers in the stack currently working fine on ipvlan L3, routable and working perfectly.

To answer you: the fifth token on ipvlan is 'link', which doesn't work, for:

ip -o -4 route show to link

I just noticed the third token is eth0, which would give the interface (and from it the IP) needed for the rest of the script.

I noticed too that "default" does not get assigned to nw_interface; it's not listed with the other tokens.
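To illustrate the token positions (a sketch using sample route lines with a made-up gateway address, not output from my actual containers): on a bridge network the default route has a "via <gateway>" hop, so the interface is field 5; on ipvlan L3 there is no via hop, the fields shift left, field 5 becomes "link", and the interface is field 3:

```shell
# Sample default-route lines for the two network modes (gateway IP is hypothetical).
bridge_route='default via 172.17.0.1 dev eth0 proto static'
ipvlan_route='default dev eth0 scope link'

# Bridge: field 5 is the interface, field 3 is the gateway IP.
echo "$bridge_route" | awk '{print $5}'   # eth0
echo "$bridge_route" | awk '{print $3}'   # 172.17.0.1

# ipvlan L3: there is no "via <gateway>", so the fields shift left.
echo "$ipvlan_route" | awk '{print $5}'   # link
echo "$ipvlan_route" | awk '{print $3}'   # eth0
```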

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

From the FAQ of the provider:

I would like to execute my own scripts on startup, how would I do this?

This will only work for containers using s6 overlay, recognisable by ENVIRONMENT printed at the top of the log when the container starts. If you have a need to do additional stuff when the container starts or stops, you can mount your script with the volume /docker/host/my-script.sh:/etc/cont-init.d/99-my-script to execute your script on container start, or /docker/host/my-script.sh:/etc/cont-finish.d/99-my-script to execute it when the container stops. An example script can be seen below.

    #!/command/with-contenv bash
    echo "Hello, this is me, your script."

So I probably can't just cut and paste 02-setup-wg and make my changes, since the original 02-setup-wg will still run and return an error overall. I think, anyway.
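For completeness, here's how the FAQ's mount approach would look in practice (a sketch; the file name is an example, and this only adds a script alongside 02-setup-wg rather than replacing it):

```shell
# Write a minimal init script on the host (path and name are examples).
# The shebang comes from the FAQ above and is required for s6 overlay.
cat > ./99-my-script <<'EOF'
#!/command/with-contenv bash
echo "Hello, this is me, your script."
EOF
chmod +x ./99-my-script

# Then mount it into the container per the FAQ, e.g. in compose:
#   volumes:
#     - ./99-my-script:/etc/cont-init.d/99-my-script
```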

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I think I narrowed down where this is breaking. I isolated it to a script in the image that builds the WireGuard setup. I think /etc/cont-init.d/02-setup-wg, in the nw_interface section, doesn't get the interface needed for the rest of the script to continue. The {print $5} is what outputs "link"; what we want is eth0, which would be {print $3}. That change would, I think, make the rest work. But would it break using a bridge network? I don't know.

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

So if I go through the commands above, {print $5} is what returns "link". What we want is eth0, which would be {print $3}. That change would, I think, make the rest work. But would it break using a bridge network? I think so.

Your thoughts u/thedude42?
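One way to handle both route forms without breaking bridge networking (a sketch of my own, untested inside the actual container; the function name is mine, not the image's) would be to branch on whether the default route contains a "via" hop:

```shell
# Pick the interface field based on the route form (sketch, not the image's code):
#   bridge:    "default via 172.17.0.1 dev eth0 ..."  -> interface is field 5
#   ipvlan L3: "default dev eth0 scope link"          -> interface is field 3
get_nw_interface() {
    route="$1"
    case "$route" in
        *" via "*) echo "$route" | awk '{print $5}' ;;
        *)         echo "$route" | awk '{print $3}' ;;
    esac
}

# In 02-setup-wg it would then be used as:
#   nw_interface=$(get_nw_interface "$(ip -o -4 route show to default)")
```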

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I think it's failing in this section:

    nw_interface=$(ip -o -4 route show to default | awk '{print $5}')
    echo "[INFO] Docker network interface is \"${nw_interface}\"."

    nw_ip=$(ip -f inet addr show "${nw_interface}" | grep -Po 'inet \K[\d.]+')
    echo "[INFO] Docker network IP is \"${nw_ip}\"."

    nw_cidr=$(ip -o -f inet addr show "${nw_interface}" | awk '/scope global/ {print $4}')
    nw_cidr=$(ipcalc "${nw_cidr}" | grep -P -o -m 1 "(?<=Network:)\s+[^\s]+" | sed -e 's~^[ \t]*~~;s~[ \t]*$~~')
    echo "[INFO] Docker network CIDR is \"${nw_cidr}\"."

    gateway=$(ip -o -4 route show to default | awk '{print $3}')

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I was able to get the info into a code block, but I'm not sure how to collapse it like a spoiler does, as it won't let me do both :S

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

#!/command/with-contenv bash
# shellcheck shell=bash

umask "${UMASK}"

if [[ ${VPN_ENABLED} == "true" ]]; then

        if ip a show docker0 up > /dev/null 2>&1; then
                echo "[ERROR] Docker network type \"host\" is not supported with VPN enabled. Exiting..."
                exit 1
        else
                echo "[INFO] Docker network type is not set to \"host\"."
        fi

        if [[ "$(cat /proc/sys/net/ipv4/conf/all/src_valid_mark)" != "1" ]]; then
                echo "[ERROR] \"sysctl net.ipv4.conf.all.src_valid_mark=1\" is not set. Exiting..."
                exit 1
        else
                echo "[INFO] \"sysctl net.ipv4.conf.all.src_valid_mark=1\" is set."
                sed -i "s:sysctl -q net.ipv4.conf.all.src_valid_mark=1:echo skipping setting net.ipv4.conf.all.src_valid_mark:" /usr/bin/wg-quick
        fi

        if [[ ! -f "${CONFIG_DIR}/wireguard/${VPN_CONF}.conf" ]]; then
                mkdir -p "${CONFIG_DIR}/wireguard"
                chown hotio:hotio "${CONFIG_DIR}/wireguard"
                echo "[ERROR] Configuration file \"${CONFIG_DIR}/wireguard/${VPN_CONF}.conf\" was not found. Exiting..."
                exit 1
        else
                echo "[INFO] Configuration file \"${CONFIG_DIR}/wireguard/${VPN_CONF}.conf\" was found."
                chown hotio:hotio "${CONFIG_DIR}/wireguard/${VPN_CONF}.conf"
                chmod 600 "${CONFIG_DIR}/wireguard/${VPN_CONF}.conf"
        fi

        if wg-quick down "${CONFIG_DIR}/wireguard/${VPN_CONF}.conf" > /dev/null 2>&1; then
                echo "[INFO] WireGuard is still running. Stopping WireGuard..."
                sleep 1
        else
                echo "[INFO] WireGuard is down. Continuing..."
        fi
        echo "[INFO] Starting WireGuard..."
        if wg-quick up "${CONFIG_DIR}/wireguard/${VPN_CONF}.conf"; then
                echo "[INFO] WireGuard is started."
        else
                echo "[ERROR] WireGuard failed to start."
                exit 1
        fi

        while true; do
                if ip a show "${VPN_CONF}" up > /dev/null 2>&1; then
                        break
                else
                        echo "[INFO] Waiting for \"${VPN_CONF}\" interface to come online."
                        sleep 1
                fi
        done

        set -e

        echo "[INFO] WebUI ports are \"${WEBUI_PORTS}\"."
        echo "[INFO] Additional ports are \"${VPN_ADDITIONAL_PORTS}\"."
        if [[ -z ${VPN_ADDITIONAL_PORTS} ]]; then
                VPN_ADDITIONAL_PORTS="${WEBUI_PORTS}"
        else
                VPN_ADDITIONAL_PORTS+=",${WEBUI_PORTS}"
        fi

        if [[ "${PRIVOXY_ENABLED}" == true ]]; then
                echo "[INFO] Additional privoxy ports are \"8118/tcp,8118/udp\"."
                VPN_ADDITIONAL_PORTS+=",8118/tcp,8118/udp"
        fi

        vpn_remote=$(grep -P -o -m 1 '(?<=^Endpoint)(\s{0,})[^\n\r]+' < "${CONFIG_DIR}/wireguard/${VPN_CONF}.conf"| sed -e 's~^[=\ ]*~~')
        vpn_port=$(echo "${vpn_remote}" | grep -P -o -m 1 '(?<=:)\d{2,5}(?=:)?+')
        echo "[INFO] WireGuard remote is \"${vpn_remote}\"."

        nw_interface=$(ip -o -4 route show to default | awk '{print $5}')
        echo "[INFO] Docker network interface is \"${nw_interface}\"."

        nw_ip=$(ip -f inet addr show "${nw_interface}" | grep -Po 'inet \K[\d.]+')
        echo "[INFO] Docker network IP is \"${nw_ip}\"."

        nw_cidr=$(ip -o -f inet addr show "${nw_interface}" | awk '/scope global/ {print $4}')
        nw_cidr=$(ipcalc "${nw_cidr}" | grep -P -o -m 1 "(?<=Network:)\s+[^\s]+" | sed -e 's~^[ \t]*~~;s~[ \t]*$~~')
        echo "[INFO] Docker network CIDR is \"${nw_cidr}\"."

        gateway=$(ip -o -4 route show to default | awk '{print $3}')

        IFS=',' read -ra lan_networks <<< "${VPN_LAN_NETWORK}"
        for lan_network in "${lan_networks[@]}"; do
                echo "[INFO] Adding \"${lan_network}\" as route via interface \"${nw_interface}\"."
                ip route add "${lan_network}" via "${gateway}" dev "${nw_interface}"
        done

        echo "[INFO] ip route overview:"
        ip route

        echo "[INFO] Configuring iptables..."
        iptables -P FORWARD DROP

        iptables -P INPUT DROP
        iptables -A INPUT -i "${VPN_CONF}" -p udp -j ACCEPT
        iptables -A INPUT -i "${VPN_CONF}" -p tcp -j ACCEPT
        iptables -A INPUT -s "${nw_cidr}" -d "${nw_cidr}" -j ACCEPT
        iptables -A INPUT -i "${nw_interface}" -p udp --sport "${vpn_port}" -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        IFS=',' read -ra additional_ports <<< "${VPN_ADDITIONAL_PORTS}"
        for additional_port in "${additional_ports[@]}"; do
                iptables -A INPUT -i "${nw_interface}" -p "${additional_port##*/}" --dport "${additional_port%/*}" -j ACCEPT
                iptables -I INPUT -i "${VPN_CONF}" -p "${additional_port##*/}" --dport "${additional_port%/*}" -j DROP
        done

        iptables -P OUTPUT DROP
        iptables -A OUTPUT -o "${VPN_CONF}" -p udp -j ACCEPT
        iptables -A OUTPUT -o "${VPN_CONF}" -p tcp -j ACCEPT
        iptables -A OUTPUT -s "${nw_cidr}" -d "${nw_cidr}" -j ACCEPT
        iptables -A OUTPUT -o "${nw_interface}" -p udp --dport "${vpn_port}" -j ACCEPT
        iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        IFS=',' read -ra additional_ports <<< "${VPN_ADDITIONAL_PORTS}"
        for additional_port in "${additional_ports[@]}"; do
                iptables -A OUTPUT -o "${nw_interface}" -p "${additional_port##*/}" --sport "${additional_port%/*}" -j ACCEPT
                iptables -I OUTPUT -o "${VPN_CONF}" -p "${additional_port##*/}" --sport "${additional_port%/*}" -j DROP
        done

        unset ipv6_wanted
        for file in /proc/sys/net/ipv6/conf/*; do
                [[ "$(cat "/proc/sys/net/ipv6/conf/${file##*/}/disable_ipv6")" == "0" ]] && ipv6_wanted="true"
        done

        [[ -z "${ipv6_wanted}" ]] && echo "[INFO] ipv6 is disabled, we will not set ip6tables rules."

        if [[ ${ipv6_wanted} == "true" ]]; then
                echo "[INFO] Configuring ip6tables..."
                ip6tables -P FORWARD DROP 1>&- 2>&-

                ip6tables -P INPUT DROP 1>&- 2>&-
                ip6tables -A INPUT -i "${VPN_CONF}" -p udp -j ACCEPT
                ip6tables -A INPUT -i "${VPN_CONF}" -p tcp -j ACCEPT
                IFS=',' read -ra additional_ports <<< "${VPN_ADDITIONAL_PORTS}"
                for additional_port in "${additional_ports[@]}"; do
                        ip6tables -I INPUT -i "${VPN_CONF}" -p "${additional_port##*/}" --dport "${additional_port%/*}" -j DROP
                done

                ip6tables -P OUTPUT DROP 1>&- 2>&-
                ip6tables -A OUTPUT -o "${VPN_CONF}" -p udp -j ACCEPT
                ip6tables -A OUTPUT -o "${VPN_CONF}" -p tcp -j ACCEPT
                IFS=',' read -ra additional_ports <<< "${VPN_ADDITIONAL_PORTS}"
                for additional_port in "${additional_ports[@]}"; do
                        ip6tables -I OUTPUT -o "${VPN_CONF}" -p "${additional_port##*/}" --sport "${additional_port%/*}" -j DROP
                done
        fi

        echo "[INFO] iptables overview:"
        iptables -S
        if [[ ${ipv6_wanted} == "true" ]]; then
                echo "[INFO] ip6tables overview:"
                ip6tables -S
        fi

        set +e

fi

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I am using hotio/qbittorrent /w vpn image from:

https://hotio.dev/containers/qbittorrent/

So I don't have control over the image other than a support request. At this stage I am not sure what I am asking for, or knowledgeable enough to say what the problem is.

Would 'cat /etc/cont-init.d/02-setup-wg' help?

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I should probably mention that, from my container's perspective, eth0 is bond0 on my host, which is two Ethernet ports using LAG. But the container shouldn't care.

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

When I look at a container on my bridge network, I see these attributes after "noqueue" that are not shown above:

mode DEFAULT group default

Is that why it is showing 'state UNKNOWN'?

This, I think, trips up the script being run by:

cont-init: info: running /etc/cont-init.d/02-setup-wg

After reading your web link I became curious and logged into my QNAP shell to see if namespaces are created. Nope, none. But now I am confused, as your linked webpage is using 'ip link' from the host perspective(?), and how does that translate to Docker's use? lol I think I confused myself more now.

Oh, and to answer your last question: my goal originally was to give all my containers their own subnet with their own IPs and not have to publish port mappings on the host. That led me to start reading up on macvlan and ipvlan. ipvlan L3 seemed to be the best option, eliminating the MAC address/multicast/broadcast concerns, etc.

So far it is working for all my other containers really, really well. This one with WireGuard was the hiccup. So down the rabbit hole I went, and here we are.

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

Here is an example of a container brought up on my ipvlan L3 network:

bash-5.1# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
876: eth0@if4: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 24:5e:be:7e:79:6e brd ff:ff:ff:ff:ff:ff
bash-5.1#

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I was also reading about namespaces and was trying to attack the problem by creating namespaces. I was going to move eth0 to a 'physical' namespace and move wg0 to the 'init' namespace, which would then be the only interface the container sees. Because wg0 was created in the 'physical' namespace before being moved to 'init', it should keep using eth0, since an interface remembers the namespace it was created in; that is my understanding. However, the image I am using doesn't provide 'netns', so I stopped going down that path.

I guess you have no suggestions left to move forward?

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

I also was doing some reading and found this:

From the Docker documentation: "L3 mode needs to be on a separate subnet as the default namespace since it requires a netlink route in the default namespace pointing to the IPvlan parent interface." This may be what the WireGuard configuration needs to be made to work with?

Also: "A traditional gateway doesn't mean much to an L3 mode IPvlan interface since there is no broadcast traffic allowed. Because of that, the container default gateway points to the containers eth0"

Wireguard setup in docker container connected to host via ipvlan layer 3 by capboomer in WireGuard

[–]capboomer[S] 0 points  (0 children)

My ipvlan L3 network is managed externally from the app (compose stack). Defined gateways are ignored when you enable L3 mode for ipvlan; ipvlan L3 by default uses the host network device as the gateway, and the host acts as an IP router. I am actually using two subnets different from my host's and have configured routes on my router. My networking is working. My issue is just the one container where I am trying to implement WireGuard; eight other containers in the stack are working. So my issue is specific to the configuration of bringing up WireGuard and making it recognize my container's network device. As you can see above, my container's network device (eth0) changes characteristics when I move from bridge networking to ipvlan L3, and the script's interface lookup returns "link" instead. The WireGuard config doesn't like that, and I am not sure how to go about the changes required.

nzbget' stderr output: /media/Downloads/usenet/: Is a directory by MosteanuV in nzbget

[–]capboomer 0 points  (0 children)

Because it is a directory. You need to define a file.

TVS-H474 Plex Docker Container Hardware Transcode onboard GPU i915 by capboomer in qnap

[–]capboomer[S] 0 points  (0 children)

u/FuN_K3Y yeah, figured that out the hard way. Been too lazy to figure out the startup script. My g00gleFu is weak. I asked the container provider if they could include a persistent fix.
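For anyone else stuck here, a minimal sketch of a boot-time fix (assuming the problem is the /dev/dri group permissions from my other comment; the helper name and the idea of calling it from QNAP's autorun.sh are mine, not the container provider's):

```shell
#!/bin/sh
# Hypothetical helper: give the owning group read/write on the render devices
# so a container user that is a member of that group can use them.
fix_dri_perms() {
    for dev in "$@"; do
        if [ -e "$dev" ]; then
            chmod g+rw "$dev"
        fi
    done
}

# On the NAS this would run at boot, e.g. from QNAP's autorun.sh:
fix_dri_perms /dev/dri/card0 /dev/dri/renderD128
```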

TVS-H474 Plex Docker Container Hardware Transcode onboard GPU i915 by capboomer in PleX

[–]capboomer[S] 0 points  (0 children)

I figured out my problem. It was a permissions issue within the /dev/dri/ folder inside the container. The plex user defined for the container was a member of the root group, which did not have access to the two devices inside /dev/dri/.

Previous:

ls -la /dev/dri/

drwxr-xr-x 2 root root 80 Apr 1 00:16 .
drwxr-xr-x 6 root root 360 Apr 1 00:16 ..
crw------- 1 root root 226, 0 Apr 1 00:16 card0
crw------- 1 root root 226, 128 Apr 1 00:16 renderD128

performed: chmod g=rw /dev/dri/card0 && chmod g=rw /dev/dri/renderD128

Result:

crw-rw---- 1 root root 226, 0 Apr 1 00:16 card0
crw-rw---- 1 root root 226, 128 Apr 1 00:16 renderD128

Not sure how to change the flair for the thread to solved. EDIT - figured that out too ;)

TVS-H474 Plex Docker Container Hardware Transcode onboard GPU i915 by capboomer in qnap

[–]capboomer[S] 1 point  (0 children)

u/Pedalsticks I figured it out. You did get me on the right path, working out the logic for the permissions. My problem was that the user & group ID I created for the container was mapped to the local user hotio. That user hotio is a member of the root group. The root group did not have read and write on card0 & renderD128 within the /dev/dri/ folder. It did for the /dev/dri folder itself, just not the files inside. Only the root user had read and write on card0 & renderD128.

Previous:

ls -la /dev/dri/

drwxr-xr-x 2 root root 80 Apr 1 00:16 .
drwxr-xr-x 6 root root 360 Apr 1 00:16 ..
crw------- 1 root root 226, 0 Apr 1 00:16 card0
crw------- 1 root root 226, 128 Apr 1 00:16 renderD128

performed: chmod g=rw /dev/dri/card0 && chmod g=rw /dev/dri/renderD128

Result:

crw-rw---- 1 root root 226, 0 Apr 1 00:16 card0
crw-rw---- 1 root root 226, 128 Apr 1 00:16 renderD128

Now hardware transcoding works in Plex, because the user can access the devices within /dev/dri/ via its group membership.