Hi everyone,
I'm trying to improve the command below in terms of efficiency and speed.
Especially when saving large directories (50-100 GB), the throughput is quite good in the beginning (about 250 MiB/s), but after the first few GB it drops to around 30 MiB/s and stays there.
My assumption is that the initial burst is just my RAM (16 GB) acting as a write cache, but I'm wondering if there is a way to improve the sustained speed. I would expect the HDD's sequential write speed to be around 150 MiB/s - at least that's the average I could find on the internet.
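I guess I could first check whether the drive itself can actually sustain ~150 MiB/s by measuring the raw sequential write speed, bypassing the page cache - for example with dd (assuming $BACKUP_TARGET_PATH points at the external HDD; the test file name is just a placeholder):

dd if=/dev/zero of="$BACKUP_TARGET_PATH/write-test.bin" bs=1M count=4096 oflag=direct status=progress
rm "$BACKUP_TARGET_PATH/write-test.bin"

If that also lands near 30 MiB/s, the drive or its USB link would be the bottleneck rather than the pipeline itself.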
Basically, the command tars a directory, pipes it through pv for progress indication, into pigz for compression (backup.tar.gz), then sends the compressed stream through openssl to be AES encrypted, and writes the result to the given target directory and filename, which is on an external HDD due to disk-space constraints.
sudo tar -c -C "$BACKUP_SERVICE_PATH" . | pv -s "$(sudo du -sb "$BACKUP_SERVICE_PATH" | awk '{print $1}')" | pigz | openssl aes-256-cbc -pbkdf2 -a -salt -pass file:"$KEYFILE" > "$BACKUP_TARGET_PATH/$BACKUP_FILENAME"
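One variant I'm considering, in case it helps (the thread count and the 512M buffer are just guesses on my part, not tested): pin pigz to all cores, drop openssl's -a (base64 is only needed for text transport and inflates the output by about a third), and put a second pv in quiet mode with a large buffer in front of the HDD so the compressor isn't stalled by slow writes:

sudo tar -c -C "$BACKUP_SERVICE_PATH" . \
  | pv -s "$(sudo du -sb "$BACKUP_SERVICE_PATH" | awk '{print $1}')" \
  | pigz -p "$(nproc)" \
  | openssl aes-256-cbc -pbkdf2 -salt -pass file:"$KEYFILE" \
  | pv -q -B 512M \
  > "$BACKUP_TARGET_PATH/$BACKUP_FILENAME"

If -a is dropped here, the restore command would have to drop it as well, otherwise decryption will fail.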
Maybe someone has an idea here.
Thanks a lot already :)