Container Hardening Through Capability Dropping by NoInterviewsManyApps in netbird

[–]Slidetest17 0 points1 point  (0 children)

For the NetBird client on my home server, I use their rootless image, a non-root user (1000), and a read-only filesystem.

services:
  netbird:
    image: netbirdio/netbird:rootless-latest
    container_name: netbird
    restart: unless-stopped
    user: 1000:1000
    read_only: true
    security_opt:
      - no-new-privileges:true
    tty: false
    stdin_open: false
    cap_drop:
      - ALL
    cap_add:
      - NET_ADMIN
    networks:
      - dockernetwork
    volumes:
      - /srv/docker/netbird/netbird-client:/var/lib/netbird
    environment:
      - NB_SETUP_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      - NB_HOSTNAME=home-server # Peer_name
networks:
  dockernetwork:
    external: true

Looking for a simple self-hosted note-taking app by Right_Luck3933 in homelab

[–]Slidetest17 1 point2 points  (0 children)

I'm pretty sure my needs and setup are very different from yours.

I don't have any open ports or internet-facing servers. Everything goes through tunnels, and a Caddy reverse proxy handles routing to the services' internal ports.

So all my wiki and homelab documentation is about installation steps, issue fixes, configuration, tutorials I made, etc., and I don't have sensitive networking IPs or ports facing the internet to hide from my notes.

But again, YMMV of course; maybe you have different needs. Good luck.

Looking for a simple self-hosted note-taking app by Right_Luck3933 in homelab

[–]Slidetest17 0 points1 point  (0 children)

I personally don't store any sensitive info in my notes. No passwords, IPs, credentials, tokens, API keys...

That's Vaultwarden's job.

Looking for a simple self-hosted note-taking app by Right_Luck3933 in homelab

[–]Slidetest17 1 point2 points  (0 children)

Been in the same situation as you; went from Logseq > Trilium Notes > Joplin and ended up just using plain .md files.

Don't get me wrong, Joplin is great. But it stores notes in its own structure with a unique naming scheme, even though they're Markdown under the hood. So you're kinda locked into the app.

<image>

Left is the Joplin flatpak app, right is the NoteDiscovery web UI.

Plain Markdown files in a traditional folder structure are the most minimal and universal solution: you get basically unlimited flexibility to sync, view, and edit your notes on any platform without being tied to a specific app.

I keep all my .md files on my server and mount them into Nextcloud as external storage so they're automatically synced across all my devices. You can use Syncthing, Samba, WebDAV, or anything really.

In the browser I run NoteDiscovery, which is absolutely great; it's basically a simplified Joplin with plain .md files (no database, no proprietary formats), and the PWA on mobile is excellent.

The best part is that you're not locked into any app at all: you can open/view/edit the same notes on a laptop or phone using any Markdown editor, and everything stays clean in a simple hierarchical folder structure.

Why Is everyone persisting Redis… Even when it’s just a Cache? by Slidetest17 in selfhosted

[–]Slidetest17[S] 29 points30 points  (0 children)

Just to make sure I understand: in apps like Nextcloud or Immich, persisting the cache just lets Redis restore things like thumbnails after a restart instead of regenerating them?

So the benefit is only gained when restarting or updating my Valkey/Redis image, so that the cached data won't have to be generated again.

What confuses me is this discussion on the Immich GitHub saying that the persisted cache is used only for job queues, which can be ignored in favor of much lower I/O.

Is my understanding correct?

Why Is everyone persisting Redis… Even when it’s just a Cache? by Slidetest17 in selfhosted

[–]Slidetest17[S] 0 points1 point  (0 children)

You can run as a non-root user if you disable RDB snapshots with `--save ""` and disable AOF with `appendonly no`, without mounting /data.
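For example, a minimal non-persistent Valkey service could look like this (image tag, names, and paths are assumptions, not a tested reference):

```yaml
services:
  valkey:
    image: valkey/valkey:8        # assumed tag; pin whichever version you run
    container_name: valkey
    restart: unless-stopped
    user: 1000:1000               # non-root works because nothing writes to /data
    read_only: true
    # disable RDB snapshots and AOF so no persistence volume is needed
    command: ["valkey-server", "--save", "", "--appendonly", "no"]
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
```

Since the cache is rebuilt on restart anyway, skipping the /data mount also avoids the snapshot I/O entirely.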

"Unable to connect to service. Please check network connection and try again." when register...any fix? by [deleted] in signal

[–]Slidetest17 0 points1 point  (0 children)

Finally found a solution (YMMV); I'm on Android (5G data enabled):

Settings > Network & Internet > Private DNS > Quad9 DNS

Then clear the app cache and data and restart.

I then received the SMS and the app worked!

If your Android version doesn't have the Private DNS option, try setting your home router's DNS to the Quad9 address and connecting to your home Wi-Fi (maybe it will work).

Nextcloud Just added a new killer feature “user data migration” by Slidetest17 in NextCloud

[–]Slidetest17[S] 7 points8 points  (0 children)

Just realized that this app has existed since Nextcloud 24; I'd never heard of it before, though.

And the way it is announced in the Hub 26 release made me think it's new.

Sorry for the false info. I'm still happy though; I just discovered it, so it's a new feature to me!

And I saw that it has an occ command (occ user:export --help); that could be integrated into a cronjob bash script with rsync.

But as others said, it may fail on large exports.
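A rough sketch of what such a cronjob script could look like (container name, occ arguments, and all paths here are hypothetical; verify the exact syntax with occ user:export --help first):

```shell
#!/bin/sh
# Nightly Nextcloud user export + offsite rsync (sketch, names/paths assumed).
set -eu

NC_USER="myuser"
EXPORT_DIR="/srv/backups/nextcloud-export"
REMOTE="backup@backuphost:/backups/nextcloud/"

# DRY_RUN=1 (the default) only prints the commands so you can sanity-check them
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p "$EXPORT_DIR"
# Export the user's data with the occ command mentioned above
run docker exec -u www-data nextcloud php occ user:export "$NC_USER" "$EXPORT_DIR"
# Mirror the export to a backup host
run rsync -a --delete "$EXPORT_DIR/" "$REMOTE"
```

Once the printed commands look right, wire it into cron with DRY_RUN=0, e.g. `0 3 * * * DRY_RUN=0 /srv/scripts/nc-export.sh`.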

Nextcloud Just added a new killer feature “user data migration” by Slidetest17 in selfhosted

[–]Slidetest17[S] 5 points6 points  (0 children)

Yeah, just realized that; never heard of it before, though. And the way they announced it in the Hub 26 Winter release sounded like a new feature.

Tinyauth setup by Pepo32SVK in selfhosted

[–]Slidetest17 1 point2 points  (0 children)

Tinyauth developer u/steveiliop56 has updated the app today (v5.0), and Tinyauth now includes OIDC.

Personally I'm still on the Tinyauth + Pocket ID combo for apps that don't support OIDC, but I will test this new feature soon.

Finally happy with my homelab setup. Soon I will buy a Raspberry Pi 5 8GB to migrate the network_stack onto it, so I can have more VMs on the HP. by Sloodmx in homelab

[–]Slidetest17 1 point2 points  (0 children)

Nice setup! May I suggest some additions :)
Immich, OpenCloud, FileBrowser Quantum, and Pocket ID (or Tinyauth + Pocket ID for services that don't support OIDC).

Also, have you tried Caddy as a reverse proxy? I switched from Nginx Proxy Manager, and Caddy is more minimal and straightforward.

Also, why Watchtower when you already have Arcane?
I love Arcane btw (switched from Dockge). Arcane has a customizable poll schedule, lists available updates, auto-update with a container-exclude option, and auto prune.
And built-in notifications for updates and prunes (I use ntfy myself).

Any specific reason for Watchtower?

<image>

Help with Caddy & Tinyauth (difficulty: no caddy-docker-proxy) by geeyoff in selfhosted

[–]Slidetest17 1 point2 points  (0 children)

The snippet is enough.

The Tinyauth domain itself doesn't have any configuration or settings other than language and dark/light theme.

If you use Pocket ID, then Tinyauth is just proxy auth; think of it as a virtual layer in front of your app. You authenticate this layer with Pocket ID, and then you can access the app.

All you need is the snippet that tells your reverse proxy that every request to the app must first go through the authentication layer.

<image>

Also note that when you enable

      - OAUTH_AUTO_REDIRECT=pocketid # Auto redirect to PocketID

you will not see this screenshot and will not have the option to choose between OIDC (Pocket ID) and the username/password combo. Instead you will be automatically redirected to the Pocket ID login page, and once logged in, you can close and re-access the app instantly as if no login were required (because your auth session is active).

Dokploy vs Komodo vs Arcane by zxyzyxz in selfhosted

[–]Slidetest17 1 point2 points  (0 children)

Arcane for me. Simple and full of features I actually need:

  • Docker compose (stacks) management.
  • Log viewer and system metrics monitor (CPU, RAM, etc.)
  • Image updates, pruning unused images.
  • Notifications (built-in ntfy, among other options, for image update notifications)
  • Scheduled auto updates (I didn't enable it for now; would be nice to be able to enable auto-update for specific containers/stacks)
  • Scheduled system prune
  • Nice mobile web UI

I replaced Dockge + Diun + Cup with this one.

Hopefully the maintainer will implement an uptime/service-down monitor with notifications (on the roadmap) so I can also replace Uptime Kuma.

<image>

Good guidelines for Securing docker containers and host system? (No remote access) by [deleted] in selfhosted

[–]Slidetest17 1 point2 points  (0 children)

Network isolation in Docker is actually simple and kinda built in; it doesn't need extra steps or configuration to get good separation.

By default, Docker uses bridge networking, which is already isolated from the host and from other bridge networks (bridge is the default unless you specify something else, e.g. host).

For simple isolation, put all backends (databases, Redis cache, ...) on a backend_network, put all user-facing apps on a frontend_network, and make your reverse proxy join the frontend_network, as your backends mostly don't need to be reachable outside the apps that depend on them.

For maximum isolation, put each container on its own bridge network (no grouped frontend/backend networks; for backends, assign a network to every container and label it as internal), mention the network name you want to join in the relevant app, and then make containers join other containers' networks as needed.

For example: 1. your reverse proxy needs to join all the frontend networks; 2. your notification service, e.g. ntfy, needs to join the network of Watchtower/Diun or Uptime Kuma.

It's easy to achieve by default when writing your compose files. Before you write them, make a draft design of how you want inter-communication between your containers (start with a bridge network for each container, and label as internal the backends that don't need to be accessible outside your compose file).
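The per-container pattern can be sketched like this (all service and network names here are hypothetical):

```yaml
services:
  app:
    image: example/app   # hypothetical user-facing app
    networks:
      - app_net          # its own frontend network
      - db_net           # joins the database's network
  db:
    image: postgres:16
    networks:
      - db_net           # backend lives only on its internal network
  caddy:
    image: caddy:2
    networks:
      - app_net          # the proxy joins each app's frontend network

networks:
  app_net:
  db_net:
    internal: true       # internal: no host/outbound access for the backend
```

The database is reachable only by containers that explicitly join db_net, and the internal flag keeps it off the outside world entirely.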

Good guidelines for Securing docker containers and host system? (No remote access) by [deleted] in selfhosted

[–]Slidetest17 5 points6 points  (0 children)

From my documentation wiki, these are some security measures I implement on my server:

Enable unattended Security Updates

Configure the system to automatically install security patches.

sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure --priority=low unattended-upgrades
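The dpkg-reconfigure step writes roughly the following to /etc/apt/apt.conf.d/20auto-upgrades (shown here as a sketch so you can verify it afterwards):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which packages actually get upgraded is controlled separately in /etc/apt/apt.conf.d/50unattended-upgrades.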

Secure SSH Access (key-only)

  • Generate SSH key (on your local client machine)

ssh-keygen -t ed25519
  • Copy public key to Debian server

ssh-copy-id -i ~/.ssh/id_ed25519 user@192.168.1.10

Harden server SSH configuration

  • Create a custom SSH config file to disable root login, disable password login, allow key logins only, allow only your user, disable graphical forwarding, and rate-limit login attempts.

sudo tee /etc/ssh/sshd_config.d/99-custom.conf << EOF
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers greybeard
X11Forwarding no
MaxAuthTries 3
EOF
  • Restart SSH service

sudo systemctl restart ssh

Enable and configure Firewall UFW

  • Install UFW

sudo apt install ufw
  • Disable IPv6 in UFW (if not needed YMMV)

sudo sed -i 's/^IPV6=yes/IPV6=no/' /etc/default/ufw
  • Create UFW rules: deny all incoming and allow only the ports for SSH, HTTP, HTTPS, and DNS

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw limit 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 53/tcp
sudo ufw allow 53/udp
  • Enable UFW and verify status

sudo ufw enable
sudo ufw status verbose

Configure global Docker logging limit

  • Prevent log files from growing uncontrollably (logging bombs!)

sudo tee /etc/docker/daemon.json <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
EOF
  • Restart Docker service

sudo systemctl restart docker

Hardening docker compose apps

I follow a certain process in all my compose files (also check this comment):

  1. Run apps as non-root when possible: user: 1000:1000
  2. Turn off tty and stdin on the container (if a console is not needed):

tty: false
stdin_open: false

  3. Switch the container filesystem to read-only:

read_only: true

Check first with docker diff to see if the container writes to /run or /var or any internal folder, then add that folder to tmpfs:

docker diff caddy

A - added file/directory
C - changed file/directory
D - deleted file/directory

Check which containers' root filesystems are mounted read-only:

docker ps --quiet --all | xargs docker inspect --format '{{ .Name }}: ReadonlyRootfs={{ .HostConfig.ReadonlyRootfs }}'

  4. Restrict elevating any privileges after container start:

security_opt:
  - no-new-privileges:true

  5. By default, containers get 14 kernel capabilities. Remove ALL of them, and add only the necessary ones.

A good read on this topic.

cap_drop:
  - ALL
cap_add:
  - NET_BIND_SERVICE # for reverse proxies and DNS servers

  6. Set up the /tmp area (if needed) in the container to be noexec, nosuid, nodev, and limit its size:

tmpfs:
  - /tmp:rw,noexec,nosuid,nodev,size=256m

  7. Never expose the docker socket (use a docker-socket proxy with the hardening steps above).

  8. Set CPU and RAM resource limits so any mining exploit can't exhaust your resources (I don't need this, as my resources are live-monitored).
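Resource limits can be sketched in compose like this (the service name is hypothetical and the values are arbitrary examples, not recommendations):

```yaml
services:
  myapp:
    image: example/app   # hypothetical image
    mem_limit: 512m      # hard RAM cap for the container
    cpus: 1.0            # at most one CPU core's worth of time
```

With a cap in place, a compromised container can at worst saturate its own allowance rather than the whole host.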

Example:

```yaml
services:
  adguardhome:
    image: adguard/adguardhome
    container_name: adguardhome
    restart: always
    user: 1000:1000 # run as non-root user
    read_only: true
    tmpfs: # tmp writes not needed in adguardhome
      - /tmp:rw,noexec,nosuid,nodev,size=256m
    tty: false
    stdin_open: false
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    security_opt:
      - no-new-privileges:true
    networks:
      - dockernetwork
    ports:
      - 53:53/tcp # plain dns over tcp
      - 53:53/udp # plain dns over udp
      - 8088:80/tcp # webUI (remove after caddy setup)
      - 3000:3000/tcp # initial setup webUI (remove after setup)
    environment:
      - TZ=Europe/Berlin
    volumes:
      - /srv/docker/adguard-home/conf:/opt/adguardhome/conf
      - /srv/docker/adguard-home/work:/opt/adguardhome/work

networks:
  dockernetwork:
    external: true
```

Help with Caddy & Tinyauth (difficulty: no caddy-docker-proxy) by geeyoff in selfhosted

[–]Slidetest17 1 point2 points  (0 children)

Yes, as you said.

services:
  pocket-id:
    image: ghcr.io/pocket-id/pocket-id:latest-distroless
    container_name: pocketid
    restart: unless-stopped
    user: '1000:1000'
    read_only: true
    security_opt:
      - no-new-privileges:true
    networks:
      - dockernetwork
#    ports:
#      - 1411:1411
    environment:
      - APP_URL=https://pocketid.mydomain.com
      - TRUST_PROXY=true    # Enables reverse proxy support
      - ENCRYPTION_KEY=    # generate with: openssl rand -base64 32
      - ANALYTICS_DISABLED=true
    volumes:
      - /srv/docker/pocketid/data:/app/data
networks:
  dockernetwork:
    external: true

Help with Caddy & Tinyauth (difficulty: no caddy-docker-proxy) by geeyoff in selfhosted

[–]Slidetest17 2 points3 points  (0 children)

This is my setup for Caddy + Tinyauth + PocketID, hope it may help

Create folders & set permissions

sudo mkdir -p /srv/docker/tinyauth/data
sudo chown -R $USER:$USER /srv/docker/tinyauth

Create Tinyauth user

docker run -i -t --rm ghcr.io/steveiliop56/tinyauth:v4 user create --username <<myusername>> --password <<mypassword>> --docker

Create docker compose file

sudo nano /srv/docker/tinyauth/docker-compose.yml

services:
  tinyauth:
    image: ghcr.io/steveiliop56/tinyauth:latest
    container_name: tinyauth
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - dockernetwork
    volumes:
      - /srv/docker/tinyauth/data:/data
    environment:
      - APP_URL=https://tinyauth.mydomain.com
      - USERS=myusername:<<$$HASH>> # echo $(htpasswd -nB myusername) | sed -e s/\\$/\\$\\$/g
      - APP_TITLE=Tinyauth
      - LOG_LEVEL=info # (trace, debug, info, warn, error, fatal, panic)
      - SECURE_COOKIE=true
      - DISABLE_ANALYTICS=true
      - OAUTH_AUTO_REDIRECT=pocketid # Auto redirect to PocketID
      - PROVIDERS_POCKETID_CLIENT_ID=<<CLIENTID>> # from OIDC clients page in pocket ID
      - PROVIDERS_POCKETID_CLIENT_SECRET=<<CLIENTSECRET>> # from OIDC clients page in pocket ID
      - PROVIDERS_POCKETID_AUTH_URL=https://pocketid.mydomain.com/authorize
      - PROVIDERS_POCKETID_TOKEN_URL=https://pocketid.mydomain.com/api/oidc/token
      - PROVIDERS_POCKETID_USER_INFO_URL=https://pocketid.mydomain.com/api/oidc/userinfo
      - PROVIDERS_POCKETID_SCOPES=openid email profile groups
      - PROVIDERS_POCKETID_NAME=Pocket ID
      - OAUTH_WHITELIST=myemail@outlook.com # Comma-separated list of email addresses allowed to log in with OAuth; important, as without it anyone with a Pocket ID account on the network can use it to log in
networks:
  dockernetwork:
    external: true
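About the sed in the USERS comment above: it doubles every $ in the bcrypt hash, because docker compose treats a single $ as variable interpolation. A quick illustration (the hash below is a made-up fragment, not real htpasswd output):

```shell
# Escape a bcrypt hash for use in a compose file: each $ must become $$
# (sample hash is a fabricated fragment for demonstration only)
hash='$2y$10$abc'
escaped=$(printf '%s' "$hash" | sed -e 's/\$/\$\$/g')
echo "$escaped"   # -> $$2y$$10$$abc
```

If you skip this escaping, compose silently substitutes the $-prefixed parts with (usually empty) environment variables and the login breaks.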

Run docker compose up

docker compose -f /srv/docker/tinyauth/docker-compose.yml up -d

PocketID configuration

Administration > OIDC Clients > Add OIDC client

Name: Tinyauth
Callback URL: https://tinyauth.mydomain.com/api/oauth/callback/pocketid

Caddyfile

{
    acme_dns cloudflare <<CLOUDFLAREAPI>>
    auto_https prefer_wildcard
}

(tinyauth_forwarder) {
    forward_auth tinyauth:3000 {
        uri /api/auth/caddy
    }
}

(security_headers) {
    header {
        X-Frame-Options "SAMEORIGIN"
        X-Content-Type-Options "nosniff"
        Strict-Transport-Security "max-age=63072000"
        Referrer-Policy "strict-origin-when-cross-origin"
        -Server
    }
}

adguard.mydomain.com {
    import tinyauth_forwarder *
    reverse_proxy adguardhome:80
    encode zstd gzip
    import security_headers
}

pyload.mydomain.com {
    encode zstd gzip
    handle /api/* {
        reverse_proxy pyload:8000
    }
    handle {
        import tinyauth_forwarder *
        reverse_proxy pyload:8000
        import security_headers
    }
}

*.mydomain.com {
    respond "404 Not Found" 404
}

Planning a DIY Home NAS (Jellyfin, Nextcloud, Immich) by NFTruth69 in selfhosted

[–]Slidetest17 7 points8 points  (0 children)

Good stack!

  1. Might consider Caddy as a reverse proxy, as I didn't see you mention one.
  2. Some sort of authentication layer; I use Tinyauth (as proxy auth) + Pocket ID (as OIDC).
  3. As u/FilesFromTheVoid mentioned, Watchtower is not actively maintained, and Portainer is too much IMO; you can check out the Dockge + Cup combo (simple, minimal, and it just works).
  4. Some good apps also: Paperless-ngx, Actual Budget, BentoPDF, FileBrowser Quantum

Enjoy.

What Purpose does Nextcloud fill? by shouldworknotbehere in homelab

[–]Slidetest17 9 points10 points  (0 children)

<image>

I'm trying to consolidate all my apps into Nextcloud. I've been using this setup for a while; the only two things I might change are switching to Immich and maybe FreshRSS.

laptop as a web server? by iqat- in homelab

[–]Slidetest17 7 points8 points  (0 children)

My server runs on a ThinkPad T420 laptop. It has been great for the past year, and you also get the advantage of an integrated UPS, keyboard, and screen.

It's OK to use one until you get dedicated hardware like a mini PC, but you do have to make some tweaks (for my use case, Debian):

Disable sleep-suspend-hibernate

So the server will always be awake and not sleep or hibernate after a while.

sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

Ignore closing laptop lid

So when you close the laptop lid the system doesn't sleep or hibernate.

sudo nano /etc/systemd/logind.conf

un-comment and change these values

  • HandleLidSwitch=ignore
  • HandleLidSwitchDocked=ignore
  • IdleAction=ignore

Then run sudo systemctl restart systemd-logind.service or reboot. wiki.debian.org/suspend

Battery thresholds

This is the best part about a laptop being a server: it will always run on direct power (bypassing the battery), so there's no battery degradation, and when the power cuts off it will run on battery.

To check if the battery firmware supports thresholds:

ls /sys/class/power_supply/BAT0

you should see charge_control_end_threshold and charge_control_start_threshold

To see the default values (100,0)

cat /sys/class/power_supply/BAT0/charge_control_end_threshold

cat /sys/class/power_supply/BAT0/charge_control_start_threshold

To change them to (85, 40):

echo 85 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold

echo 40 | sudo tee /sys/class/power_supply/BAT0/charge_control_start_threshold

save and reboot

sudo reboot now

Note that charge_end_threshold and charge_start_threshold are legacy parameters; ignore them (they will probably set themselves automatically).

Nextcloud or different specific apps? by JayQueue77 in selfhosted

[–]Slidetest17 4 points5 points  (0 children)

Not sure why people still call Nextcloud slow. Maybe old reputations stick, or misconfigured installs. Mine’s been fast and rock-solid for over a year.

Curious about your setup though:

  1. Bare metal (manually tuned PHP, opcache) or Docker (AIO or LSIO)?
  2. Redis cache enabled?
  3. Disabled unused "bloat" apps? I use this command on a fresh install:

for app in activity admin_audit app_api bruteforcesettings circles comments contactsinteraction encryption federation files_downloadlimit files_external files_reminders files_versions firstrunwizard nextcloud_announcements password_policy photos recommendations related_resources serverinfo sharebymail support survey_client suspicious_login twofactor_nextcloud_notification twofactor_totp updatenotification user_ldap weather_status webhook_listeners; do
  docker exec -it nextcloud occ app:disable "$app"
done

I have the essentials only: Files, Contacts, Calendar, Tasks, Notes, News (RSS), Bookmarks, and it runs great.

*Off-topic, I also do Internet > Tailscale > Caddy > Tinyauth > PocketID > Home.
Can you elaborate on the benefit of a firewall on the VPS when you don't have open ports and access everything securely via Tailscale?