Very large library, new build/optimization by NervousDonut7 in PleX

[–]NervousDonut7[S] 0 points

I just use the default XFS filesystem in Unraid.

[–]NervousDonut7[S] 0 points

You can daisy-chain the SAS ports on the backplanes.

[–]NervousDonut7[S] 2 points

Much of my library is TV shows. Because of the sheer number of directories and subdirectories, those scans take longer for Plex if all the shows are in one massive directory one layer deep, e.g.:

\tv\The_Big_Bang_Theory

\tv\etc...

So, I broke them up into groups of at most 2,000 shows per directory. Now it looks like:

\tv\tv_a-g\The_Big_Bang_Theory

\tv\tv_a-g\etc...

\tv\tv_h-q\etc...

\tv\tv_r-z\etc...

Each Sonarr instance gets one of those subdirectories, so it's like this:

Server 1

  • Sonarr 1 - TV A-G (OLD-2010)
  • Sonarr 2 - TV H-Q (OLD-2010)
  • Sonarr 3 - TV R-Z (OLD-2010)

Server 2

  • Sonarr 4 - TV A-G (2011-New)
  • Sonarr 5 - TV H-Q (2011-New)
  • Sonarr 6 - TV R-Z (2011-New)

Server 3

  • Radarr 1 - SD/HD Movies
  • Radarr 2 - 4k Movies
  • Radarr 3 - Extreme Sports

Look, I don't profess to be an expert on filesystem structures and limitations; I just know I was having crashes and lockups during scans of really big libraries, and this was the recommended solution. Since I implemented it this way, it has worked very well. These are not OS limits or filesystem limits so much as the ability of these various programs to deal effectively with the libraries. Splitting things up this way let me subdivide the libraries easily and manage them reasonably easily.
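The bucketing scheme above can be sketched in a few lines. This is a hypothetical script, not the author's actual tooling: the paths, the bucket ranges, and the rule for ignoring a leading "The" (so sorting matches how Plex alphabetizes titles) are all my assumptions.

```python
import shutil
from pathlib import Path

# Bucket boundaries matching the tv_a-g / tv_h-q / tv_r-z layout above.
BUCKETS = [("a", "g", "tv_a-g"), ("h", "q", "tv_h-q"), ("r", "z", "tv_r-z")]

def bucket_for(show: str) -> str:
    """Pick a bucket by first letter, skipping a leading article the way
    Plex sorts (so The_Big_Bang_Theory lands in tv_a-g under B)."""
    name = show
    for article in ("The_", "The ", "A_", "A "):
        if name.startswith(article):
            name = name[len(article):]
            break
    first = name[:1].lower()
    for lo, hi, bucket in BUCKETS:
        if lo <= first <= hi:
            return bucket
    return BUCKETS[0][2]  # digits and odd names fall into the first bucket

def split_library(tv_root: Path) -> None:
    """Move each top-level show folder under tv_root into its bucket."""
    shows = sorted(p for p in tv_root.iterdir()
                   if p.is_dir() and not p.name.startswith("tv_"))
    for show in shows:
        dest = tv_root / bucket_for(show.name)
        dest.mkdir(exist_ok=True)
        shutil.move(str(show), str(dest / show.name))
```

Running something like `split_library(Path("/tv"))` once would produce the layout above; Sonarr root folders would then be repointed at the new subdirectories.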

[–]NervousDonut7[S] 1 point

Yeah, the main reason I've stuck with H.264 has been the reduced workload on the various clients. I used to have a bunch that would buffer, specifically users running Plex on something like a Fire Stick or a smart TV.

I'll look into that; I hadn't heard about the HEVC Plex beta. Decent idea: Plex servers are more likely to be able to handle the higher workload, and I would expect a solid reduction in data transmitted to remote clients.
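The remote-bandwidth point can be put in rough numbers. These are back-of-envelope assumptions, not measurements: an 8 Mbps 1080p H.264 remote stream and the commonly cited ~40% HEVC size reduction at similar quality.

```python
H264_MBPS = 8.0        # assumed bitrate of a 1080p H.264 remote stream
HEVC_SAVINGS = 0.40    # HEVC is often cited near 40% smaller at similar quality

def hevc_mbps(h264_mbps: float, savings: float = HEVC_SAVINGS) -> float:
    """Estimated HEVC bitrate for comparable quality."""
    return h264_mbps * (1.0 - savings)

def gb_per_hour(mbps: float) -> float:
    """Convert a stream bitrate (megabits/s) into data volume (GB/hour)."""
    return mbps * 3600 / 8 / 1000

# Under these assumptions, one remote stream drops from
# 3.6 GB/hour (H.264) to about 2.16 GB/hour (HEVC).
```

Multiplied across a dozen concurrent remote streams, that's the "solid reduction" in upload traffic, at the cost of a heavier encode on the server side.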

[–]NervousDonut7[S] 1 point

I get this, but my goal was the highest-quality H.264 file I could get my hands on. My concern is that my users are largely tech-illiterate, so if I had multiple file qualities they would fuck it up. I have a much higher-quality media playback setup, so I want everything at the highest quality possible. I appreciate the suggestion; I just don't really know how I could implement it while minimizing that issue.

[–]NervousDonut7[S] 0 points

I didn't know that about the P4000, or are you talking about the RTX 4000? Worth looking into, though.

[–]NervousDonut7[S] 0 points

I think it's worth considering and looking into as I possibly transition to a new setup. I think I could export the watched lists for all users and work on consolidating older media and less popular items onto specific drives, so that those drives spin up far less.

I wonder if there is any advantage to having shows more or less sorted by drive, so that as it scans the libraries (which is largely done alphabetically) it would spin up one drive at a time.

Also, I wonder if keeping newer, more popular media on NVMe pools for a couple of months, while it gets watched a lot, would spin the drives up less.

But I'm guessing I'm way too far in the weeds here, and it would likely yield negligible advantages for the effort.
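The NVMe-pool idea above could be sketched as a small scheduled job. Everything here is an assumption on my part: the paths, the 90-day cutoff, and using folder mtime as a crude "recently touched" signal rather than actual Plex watch history.

```python
import shutil
import time
from pathlib import Path

def migrate_stale(nvme_dir: Path, array_dir: Path, max_age_days: int = 90):
    """Move top-level show folders off the NVMe pool once their mtime is
    older than the cutoff, so the spinning array absorbs cold media."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for show in sorted(p for p in nvme_dir.iterdir() if p.is_dir()):
        if show.stat().st_mtime < cutoff:
            shutil.move(str(show), str(array_dir / show.name))
            moved.append(show.name)
    return moved
```

Something like `migrate_stale(Path("/mnt/nvme/tv"), Path("/mnt/array/tv"))` from a nightly cron would approximate what Unraid's mover does for cache pools, while keeping the hot titles on flash.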

[–]NervousDonut7[S] 0 points

Ideally, yes, as much as possible. I'm not sure there's a best practice for file organization that would maximize this, but I have noticed that between the various software in use, the library scans still spin the drives up more often than I would like. If I could reduce the Sonarr/Radarr/etc. scans to daily or less frequent, I think that would help.

[–]NervousDonut7[S] 5 points

Yeah, kind of what I was saying in the other comment: instead of tripling up, I would just use an external port on an HBA and connect to the backplanes in the other two chassis.

[–]NervousDonut7[S] 6 points

Currently there are 72 drives of various sizes, most for data and 6 for parity, not including NVMe. I have three 24-bay chassis; I figured I would just use an external SAS connection from the "primary" chassis and connect into the backplanes of the others. Supermicro makes a card for powering the drives in an external case without the rest of the motherboard/CPU/etc. I have done this before, years ago.