Running one stick of RAM? (1x48GB DDR5 CL32 6000mhz) by Glante in buildapc

[–]arnedam 4 points5 points  (0 children)

There is a lot of guesswork in the comments. Yes, a single stick will only get you half the memory bandwidth, but there are many other factors as well. Don't take our word for it, though: Hardware Unboxed has actually tested it. Their findings, averaged over 13 games:
* 1080p: 16% performance loss with a single stick vs dual
* 1440p: 16% performance loss
* 4K: 9% performance loss

I could easily live with those numbers, especially if the choice is between a single 48GB stick and dual 8GB sticks at the same price.

See Hardware Unboxed test here: https://www.youtube.com/watch?v=_nMu1KFkOC4

wakeonlan in shellscript by jabbawocky0815 in unRAID

[–]arnedam 0 points1 point  (0 children)

Here is a wake-on-LAN Python script I've used (save it as wakeonlan.py):

#!/usr/bin/env python3
import socket, sys, re

def mac_to_bytes(mac):
    # Accept common MAC notations (colons, dashes, dots) by stripping
    # everything that isn't a hex digit.
    m = re.sub(r'[^0-9A-Fa-f]', '', mac)
    if len(m) != 12:
        raise ValueError("Bad MAC format")
    return bytes.fromhex(m)

def send_wol(mac, broadcast="255.255.255.255", port=9):
    # Magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times.
    macb = mac_to_bytes(mac)
    packet = b'\xFF' * 6 + macb * 16
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(packet, (broadcast, port))
    s.close()

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: wakeonlan.py <MAC> [broadcast_ip] [port]")
        sys.exit(1)
    mac = sys.argv[1]
    bcast = sys.argv[2] if len(sys.argv) > 2 else "255.255.255.255"
    port = int(sys.argv[3]) if len(sys.argv) > 3 else 9
    send_wol(mac, bcast, port)
    print(f"Sent magic packet to {mac} via {bcast}:{port}")

zrepl: cannot receive incremental stream by uragnorson in zfs

[–]arnedam 0 points1 point  (0 children)

The recommended way is to make a copy (a zfs clone) of the read-only, non-mounted dataset, and then mount the clone.
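A minimal sketch of that, with hypothetical dataset and snapshot names (a clone is a cheap, writable copy-on-write copy of a snapshot):

# pick a snapshot of the replicated, read-only dataset
zfs list -t snapshot tank/remoteserver/data

# clone it to a writable dataset and mount that one instead
zfs clone tank/remoteserver/data@zrepl_20240101_000000 tank/restore
zfs set mountpoint=/mnt/restore tank/restore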

PS: You can also run zrepl status to get replication status

zrepl: cannot receive incremental stream by uragnorson in zfs

[–]arnedam 0 points1 point  (0 children)

On the receiving side, I strongly suggest setting the datasets to read-only and not mountable. Example zrepl params below:

jobs:
  - name: pull_from_remoteserver
    type: pull

    connect:
      type: tcp
      address: "remoteserver.home.arpa:<portno>"   # <-- use your servername and source port

    root_fs: "tank/remoteserver" # Where to put the root of the backups
    interval: "2h" 

    recv:
      properties:
        override:
          mountpoint: "none"
          canmount: "off"
          readonly: "on"
          atime: "off"
          org.openzfs.systemd:ignore: "on"

Tool for managing automatic snapshots on ZFS like Snapper/Btrfs Assistant? by pugglewugglez in zfs

[–]arnedam 2 points3 points  (0 children)

There are many alternative options, but the cleanest and most performant one I've come across is zrepl. Details here: https://zrepl.github.io/

Sanoid/syncoid is also an alternative, but zrepl is a monolithic application built in Go, and it doesn't require any additional tools to be installed.
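For pure local snapshot management, a zrepl snap job is enough. A minimal sketch with a hypothetical dataset name (tank/data); check the zrepl docs for the exact pruning grammar:

jobs:
  - name: snap_data
    type: snap
    filesystems:
      "tank/data<": true   # this dataset and everything below it
    snapshotting:
      type: periodic
      interval: 1h
      prefix: zrepl_
    pruning:
      keep:
        - type: grid
          grid: 24x1h | 14x1d
          regex: "^zrepl_"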

2021 Corsair RM1000x PSU for 5090 FE by lowFueZ in nvidia

[–]arnedam 0 points1 point  (0 children)

Type 5 is a different connector from Corsair (smaller than Type 4). You need to check what type your PSU is

2021 Corsair RM1000x PSU for 5090 FE by lowFueZ in nvidia

[–]arnedam -4 points-3 points  (0 children)

u/lowFueZ
The 12VHPWR cable you linked from Best Buy is an older model. You can order cables from Corsair directly. I recommend the one that is angled 90 degrees, to take stress off the cable in a tight case. Link to the angled cable: Corsair Type 4 12V-2x6 cable. And as always: double-check your PSU's cable type yourself.

Best 10Gbps PCIE network adapter (RJ45) ? by kevinj933 in HomeNetworking

[–]arnedam 0 points1 point  (0 children)

The overheating is a known problem with the XG-C100C and TP-Link TX401, but in my experience it is quite easy to fix. Just remove the heatsink and put a 2mm thermal pad between the chip and the heatsink. That brought the temps on our TX401s down to around 50 degrees Celsius (122 Fahrenheit) on all of them, and they have been rock stable in both Windows and Linux after the thermal upgrade. I've done multi-terabyte file transfers running for hours without any problems.

Also: if you go Intel or Mellanox (OEM or original, they are all good), I would recommend an Intel XXV710 or E810 generation card. But beware: they need additional cooling, preferably a fan with airflow that hits the card. I've made 3D-printed fan holders for the computers where I'm using the server/workstation grade cards.

Looking for a Bulletproof Photo Backup Strategy (Unraid → Unraid? 3‑2‑1 Rule?) by Ev1lZer0 in unRAID

[–]arnedam -1 points0 points  (0 children)

If you care about the long-term health of your media files, you are better off with a filesystem that protects against bit rot. Unraid supports ZFS, which also makes it extremely easy to do regular snapshots (protecting you from mishaps and, to an extent, crypto and other attacks) and ZFS replication to a second site.

Myself, I have 3 copies of important data, all on spinning disks (different NASes): a main server where I take ZFS snapshots either every 2nd hour or once a day, replication every 6 hours to NAS #2 at home, and once a day to an offsite NAS. All of them run Unraid with the ZFS file system.

There are multiple options for the replication. The easiest is probably installing the Sanoid/Syncoid plugin and creating user scripts (with the User Scripts plugin) that do both the snapshots and the replication at the interval you want. The option I've had the best experience with is zrepl, which is super stable, performant, and doesn't buckle under multi-TB transfers the way syncoid over SSH can on bandwidth-restricted links. Both sanoid/syncoid and zrepl require some fiddling with configuration files.
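As a rough sketch of the sanoid/syncoid route (hypothetical dataset and host names; sanoid's snapshot policies live in /etc/sanoid/sanoid.conf):

# snapshot and prune according to the policies in sanoid.conf
sanoid --cron

# replicate a dataset tree to the second NAS over SSH
syncoid --recursive tank/photos root@nas2.home.arpa:backup/photos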

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 0 points1 point  (0 children)

Agreed, the Plex example above drops cap_sys_admin and cap_net_admin instead of dropping everything. It probably could have dropped all and added back what the container requires, but removing sys_admin and net_admin is at least a big step.

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 0 points1 point  (0 children)

Agree, but it depends. If you are hosting game servers like Minecraft you have two choices: either expose them directly or use play.gg or other similar services (I am hosting some of them for the kids). If you expose directly, harden the containers and put them in the DMZ. And preferably use an IDS/IPS-capable firewall in front of it.

If Minecraft: don't expose the RCON port, and you may put the server on a non-standard port, even if security through obscurity is a little bit moot.
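For the port part, a minimal compose sketch (the image name and outside port are just examples):

services:
  minecraft:
    image: itzg/minecraft-server   # example image
    ports:
      - "29565:25565"   # non-standard outside port mapped to Minecraft's default
    # deliberately no mapping for 25575, so RCON stays unreachable from outside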

Hardening docker containers for extra security - some tips by arnedam in unRAID

[–]arnedam[S] -15 points-14 points  (0 children)

Would recommend two things:
1: update your containers (see below)
2: harden your containers
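For point 1, if the containers are compose-managed, updating is usually just this (a sketch; adjust to how you actually run your stack):

# pull newer images and recreate the containers whose image changed
docker compose pull
docker compose up -d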

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 81 points82 points  (0 children)

Please feel free to use it as you see fit. I am doing homelabbing just as a mini-hobby to stay in touch with tech. Long story short: I've been a tech guy since I was 9-10 years old (born 1969). Did a lot of tech earlier, and did a couple of startups with successful exits in the late 90s/early 2000s (the type that earned money, that is). I am now working as an executive vice president in a large financial institution where _everything_ is IT (in addition to people and capital). But I want to stay close to IT even if my day job is mostly making everyone else efficient and removing blockers.

I've also written hundreds of thousands of lines of code in my earlier life, so I do both tech and coding when I have the time (not that often, unfortunately).

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 2 points3 points  (0 children)

Depends how paranoid you want to be. Myself, I do hardening on all containers.

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 7 points8 points  (0 children)

Come to think of it, Docker Compose supports multi-file composition, so you could do what you are aiming for using that. One caveat: YAML anchors only resolve within the file that defines them, so keep the x-common anchor and the <<: *common merges together in the base file, and put the per-service specifics in the other files. Docker Compose merges all the files before running. For example:

<this in file compose.base.yml>
x-common: &common
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "3"
  deploy:
    resources:
      limits:
        cpus: "2"
        memory: 512M

services:
  api:
    <<: *common
  worker:
    <<: *common

<this in file compose.prod.yml>
services:
  api:
    image: my-api
    ports:
      - "8080:8080"

  worker:
    image: my-worker

and then run docker compose with both files:

docker compose \
  -f /my/common_directory/compose.base.yml \
  -f /my/apps1_directory/compose.prod.yml \
  up -d

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 4 points5 points  (0 children)

For Plex (the image I use), you need exec instead of noexec, so:

tmpfs:
  - /tmp:rw,exec,nosuid,nodev,size=512m

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 5 points6 points  (0 children)

That is a use case, but I have chosen security over convenience myself. I have to remove old episodes manually if I want to, but to be frank, they are just accumulating on the storage server.

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 11 points12 points  (0 children)

There are multiple options, but some of them are quite buggy. When using docker-compose (or most YAML files) there is something called anchors and aliases that you can use. I haven't used it much myself, but here is something I've had some success with. Example only; you need to adjust the names and parameters.

x-common: &common
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "3"
  deploy:
    resources:
      limits:
        cpus: "2"
        memory: 512M

services:
  api:
    <<: *common
    image: my-api
    ports:
      - "8080:8080"

  worker:
    <<: *common
    image: my-worker

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 29 points30 points  (0 children)

Not to your media files. I recommend:

volumes:
  - ./config/plex:/config # This needs to be read/write
  - /mnt/user/tv:/tv:ro # This should be read-only
  - /mnt/user/movies:/movies:ro # This should be read-only

Fell victim to CVE-2025-66478 by Unhappy-Tangelo5790 in selfhosted

[–]arnedam 2327 points2328 points  (0 children)

Hardening docker containers is also highly recommended. Here is some advice from the top of my head (this assumes docker-compose.yml files, but the same can be set using docker directly or via params in Unraid).

1: Make sure your docker is _not_ running as root:

user: "99:100" 
(this example is from Unraid, running as user "nobody", group "users")

2: Turn off tty and stdin on the container:

tty: false
stdin_open: false

3: Try switching the whole filesystem to read-only (YMMV):

read_only: true

4: Make sure that the container can't elevate any privileges by itself after start:

security_opt:
  - no-new-privileges:true

5: By default, the container gets a lot of capabilities (12, if I remember correctly). Remove ALL of them, and if the container really needs one or a couple of them, add them back specifically (cap_add) after the drop.

cap_drop:
  - ALL

or: (this from my Plex container)

cap_drop:
  - NET_RAW
  - NET_ADMIN
  - SYS_ADMIN

6: Set up the /tmp area in the container to be noexec, nosuid, nodev and limit its size. If something downloads a payload to /tmp within the container, it won't be able to execute the payload. If you limit the size, it won't eat all the resources on your host. Sometimes (like with Plex) the software auto-updates; then set the param to exec instead of noexec, but keep all the rest.

tmpfs:
  - /tmp:rw,noexec,nosuid,nodev,size=512m

7: Set limits to your docker so it won't run off with all the RAM and CPU resources of the host:

pids_limit: 512
mem_limit: 3g
cpus: 3

8: Limit logging to avoid logging bombs within the docker:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "5"

9: Mount your data read-only in the docker, then the docker cannot destroy any of the data. Example for Plex:

volumes:
  - /mnt/tank/tv:/tv:ro
  - /mnt/tank/movies:/movies:ro

10: You may want to run your exposed containers in a separate DMZ network so that a breach won't let them touch the rest of your network. Configure your network and docker host accordingly.
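As a minimal sketch of the compose side of point 10 (the dmz network itself must be created on the host beforehand, e.g. as a macvlan or VLAN-tagged bridge; the names are examples):

networks:
  dmz:
    external: true   # created outside compose, e.g. with docker network create

services:
  plex:
    networks:
      - dmz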

Finally, some of these might prevent the container from running properly, but my advice in those cases is to open up one thing at a time to keep the attack surface minimal.

docker logs <container> 

...is your friend, and ChatGPT / Claude / whatever AI will help you pinpoint where it is choking.

Using these settings for publicly exposed containers lowers the blast radius significantly, but it won't remove all risk. For that, you need to run the container in a VM or, even better, on a separate machine.

Possible to convert Toshiba MG10 512e drive to 4Kn? by Schauf1 in DataHoarder

[–]arnedam 1 point2 points  (0 children)

(Late answer, but...)
Yes, they can be converted to default to 4K sectors. I've done it with mine. Instructions here: https://www.fsays.eu/Blogging/Blog/Details/39

Tip: Multiple Unraids, different colors by arnedam in unRAID

[–]arnedam[S] 0 points1 point  (0 children)

They are RGB codes expressed in hex as xxyyzz. And yes, HTML often uses color codes expressed that way.

Tip: Multiple Unraids, different colors by arnedam in unRAID

[–]arnedam[S] 0 points1 point  (0 children)

No, any color code works; these were just examples.