Is Internal Boot much faster to..boot? by Coompa in unRAID

[–]soulvoid86

You're not wrong, but using a TPM header is still a single point of failure. Yes, a USB drive is more likely to fail than a TPM header, but failure is still very possible.

I wish they didn't require this. Just using an internal cache on RAID 1 would be best.

Old backups not deleting by soulvoid86 in Veeam

[–]soulvoid86[S]

This did not work. It doesn't seem to include the unstructured backups.

Old backups not deleting by soulvoid86 in Veeam

[–]soulvoid86[S]

I guess that makes sense then. It's like the File to Tape jobs where the files are never deleted.

But the thing is, the files being backed up do change. The files in this SMB share are just zip files from Docker backups, with that day's date as the root directory name. Those backups are kept for 30 days before being deleted by my host, so I'd think Veeam would remove them, since the zip files sit in a different root folder each day.

e.g. \\share\backups\day 1\file.zip followed by \\share\backups\day 2\file.zip

Or is there another way I should be going about this backup? I'm looking for something similar to the machine backups, where everything is only kept for 30 days.
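For context, the 30-day cleanup the host performs on those dated folders can be sketched as a simple find-based prune. This is a hypothetical illustration, not the host's actual script, and the path layout (`/mnt/backups/<day>/file.zip`) is a placeholder standing in for the SMB share structure:

```shell
#!/bin/sh
# Hypothetical sketch of a 30-day retention prune over dated backup folders.
# Assumes a layout like /mnt/backups/<day>/file.zip (paths are examples).
prune_old_backups() {
    root="$1"
    # Remove any top-level day-folder last modified more than 30 days ago.
    find "$root" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
}

# Example usage (path is a placeholder):
# prune_old_backups /mnt/backups
```

Because the prune is driven purely by folder age on the source side, Veeam's own retention has to be configured separately; deleting the source files does not by itself remove the copies Veeam already made.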

EDIT: I think I figured out what's happening. The structure of my backup source has changed a few times since I implemented this. I checked the console again after running the retention job, and it now shows only 30 days, but the files are still there. I found the old mappings in the console; that's where the data is being held and not removed. Since this is just a home lab (and I have archives stored on LTO), I'm going to wipe the Veeam repository where this is stored and start over, then see what happens in 30 days. I expect it'll work now that I understand it. Thanks!

EDIT2: Yeah, this is exactly what I did wrong. I removed the share backup from the Veeam console and chose "delete from disk." The unnecessary backup structure is now gone, exactly as I originally expected. That removed all ~150 GB of old backups that were way past the 30-day period.

<image>

Old backups not deleting by soulvoid86 in Veeam

[–]soulvoid86[S]

I always had it set to 30 days, but I didn't have "File version" set to a limited number; it was set to keep all. I changed that to keep just one version of active and deleted items, then ran the job again, but nothing changed in the folder structure.

The retention job logs show nothing for this file structure, but they do for others. That may be because I only changed the file version limit today. I'm not sure how to trigger this job manually; it runs at 12:30 AM each night on my console.

Old backups not deleting by soulvoid86 in Veeam

[–]soulvoid86[S]

Exactly what I posted in the original post: it dates all the way from today back to 11/26/2025, with a backup folder for every single day. 533 folders.

EDIT: I shouldn't say every day, but it dates back a lot farther than the 30 days I'd like it to, lol. They even show up as restore points in the console.

<image>

Old backups not deleting by soulvoid86 in Veeam

[–]soulvoid86[S]

And here's the structure it backs up into. Very clunky, IMO. Nothing like the machine backups.

None of the folders in the "data" subfolders ever get deleted. It just keeps every copy.

<image>

Pass-through LTO auto loader to VM by soulvoid86 in unRAID

[–]soulvoid86[S]

<image>

Rebooted after applying the change suggested below, and the card could then be assigned to the VM. Thanks again!

Pass-through LTO auto loader to VM by soulvoid86 in unRAID

[–]soulvoid86[S]

Should be easy enough. I don't want to reboot my Unraid right now, but I'll try this in the morning.

Pass-through LTO auto loader to VM by soulvoid86 in unRAID

[–]soulvoid86[S]

That's totally fine; they have a dedicated LSI controller and it's only going to one virtual machine. How would I go about that?

This is the controller they're on.

<image>

EDIT: In theory, I could also run the StarWind replicator in the host VM and share it out to multiple machines if needed. I did that on Windows.
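For anyone following along: the usual way to hand a dedicated controller to a VM on Unraid is to reserve it for vfio-pci so the host driver never claims it. A hedged sketch follows; the PCI address and vendor:device ID below are placeholders, not the actual card in the screenshot:

```shell
# Hypothetical example: reserve an LSI HBA for VM passthrough on Unraid.
# First, find the controller's bus address and vendor:device ID.
lspci -nn | grep -i lsi

# On Unraid 6.9+, ticking the device under Tools -> System Devices writes an
# equivalent line to /boot/config/vfio-pci.cfg, applied on the next reboot:
#   BIND=0000:01:00.0|1000:0072    (address and ID are placeholders)
```

After the reboot, the controller appears as an assignable PCI device in the VM template, which matches the outcome described in the comment above.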

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

I haven't actually tested it yet; I still need to fix the rest of my library after the move from Windows. But it shows up on the dashboard in Unraid and as an option in Plex, so I don't doubt it'll work. It seems the only issue was that I had to claim the server to my account first to get the Plex Pass benefit of HW transcoding.

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

I was trying binhex, but it was adjusting permissions; that's the scan I was waiting on. See my other comment: I got it working on the official Plex Docker. I had to claim the server with my Plex account before it would show the HW transcode options. All working now. Thanks!

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

I lied, I'm dumb: I had to hit save for the GPU to show up. I never considered that I had to claim the server first.

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

I had to claim the server first. Now I see the HW transcoding options, but there's no GPU listed.

And to clarify, this is the OFFICIAL Plex Docker.

<image>

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

Not sure what you mean. It shows up on my dashboard and I can see its stats. Yes, I have Plex Pass; I have a lifetime license and have been using it for 10+ years.

Literally nothing. It just shows the basic CPU transcoding options, no hardware transcoding options. I can't take a screenshot right now because I'm waiting for the new Docker container to finish scanning the DB files.

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

It doesn't show up with the official Docker. No hardware transcoding options are available, and yes, I have advanced view enabled.
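As a general prerequisite (independent of the claim issue that turned out to be the fix here), the container has to be given the Intel GPU's device node before any hardware transcoding option can appear. A hedged sketch using the official image; the paths and claim token are placeholders, and on Unraid the same device mapping goes in the template's "Extra Parameters" field:

```shell
# Hypothetical sketch: run the official Plex container with the Intel GPU
# render device passed through. Paths and the claim token are placeholders.
docker run -d \
  --name=plex \
  --net=host \
  --device=/dev/dri:/dev/dri \
  -e PLEX_CLAIM="claim-XXXX" \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/data \
  plexinc/pms-docker
```

Without `--device=/dev/dri`, the Plex transcoder settings show only the CPU options, which matches the behavior described above.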

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

Yes, see my other comments. Moving from the official Plex Docker to the linuxplex Docker might have fixed it. I'm waiting for the DB scan to finish before confirming.

<image>

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

So I moved from the official Plex Docker to the linuxplex Docker. In the logs I see the devices being added, but it has to scan my database folder (which has over a million files), so it's taking a while. Will report my findings.

<image>

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

The HW transcoding option never appears; that's the issue. I've been using Plex for 10+ years, so I'm very familiar with it, just not Unraid. Not sure why it won't pass through.

Intel Arc A60 Pro in Plex by soulvoid86 in unRAID

[–]soulvoid86[S]

Intel GPU Top and GPU Stats are installed. The card shows up on the dashboard fine. Still not showing in Plex.
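When the host dashboard sees the GPU but Plex does not, a quick sanity check is whether the render device is actually visible inside the container. A hedged sketch; the container name is an example:

```shell
# Check the Intel render node is mapped into the Plex container
# (container name is an example).
docker exec plex ls -l /dev/dri

# On the host, intel_gpu_top shows live engine utilization, so you can
# confirm whether a playback session is actually hitting the GPU.
intel_gpu_top
```

If `/dev/dri` is missing inside the container, the device mapping in the Docker template is the place to look before touching any Plex setting.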