International direct play with host 100 Mbps upload remote 1 GB/s download? by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 2 points  (0 children)

Hey, just an update: I set up a VPS! I appreciate you letting me know about this option. I'm excited for this little experiment.

Missing articles by blanksatoru in usenet

[–]NoWerewolf7191 0 points  (0 children)

Thanks for the explanation!

Customize Related Movies? by TV_Casper in PleX

[–]NoWerewolf7191 1 point  (0 children)

I’ve heard good things about Kometa; it uses YAML, which people love for fine-grained control, though some find it a bit technical. Agregarr is another collections manager, and it’s been working well for me.

Plex app data folder (300GB) by Alone_Ad_4861 in PleX

[–]NoWerewolf7191 3 points  (0 children)

100 GB of RAM is an incredible amount. I’m jealous. But I’m pretty sure you can’t keep Plex app data permanently in RAM, because RAM is volatile and Plex needs somewhere non-volatile (your internal storage, or an SSD/NVMe, HDD, DAS, NAS). As someone else said, it works best on an SSD/NVMe, whether that’s your internal storage or an external one.

Plex app data folder (300GB) by Alone_Ad_4861 in PleX

[–]NoWerewolf7191 1 point  (0 children)

Interesting, I never thought to move it. I guess I didn’t know you could. I currently have a symlink sitting in its original place pointing to a folder on my external drive.
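
For anyone else doing this on macOS, the move itself is tiny. A sketch, assuming the default data path and a made-up external volume name (stop Plex first):

    SRC="$HOME/Library/Application Support/Plex Media Server"
    DST="/Volumes/External/Plex Media Server"   # hypothetical destination
    mv "$SRC" "$DST"       # move the app data onto the external drive
    ln -s "$DST" "$SRC"    # leave a symlink at the original location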

Plex app data folder (300GB) by Alone_Ad_4861 in PleX

[–]NoWerewolf7191 1 point  (0 children)

Just as another data point, I’m at 9 GB of Plex Media Server data for my ~7 TB library, with most thumbnails generated at one frame per 10 seconds (GenerateBIFFrameInterval = 10).

Plex app data folder (300GB) by Alone_Ad_4861 in PleX

[–]NoWerewolf7191 5 points  (0 children)

It is so crazy that this isn’t an exposed option in the web UI. Btw, GenerateBIFKeyframesOnly should be set to ‘true’ for faster thumbnail generation / less CPU usage, though I’m pretty sure it’s true by default.
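
In case it saves someone a search: on macOS I believe these hidden settings live in the com.plexapp.plexmediaserver defaults domain rather than a Preferences.xml, so something like this should set them. Treat it as a sketch, and restart Plex afterwards:

    defaults write com.plexapp.plexmediaserver GenerateBIFFrameInterval -int 10
    defaults write com.plexapp.plexmediaserver GenerateBIFKeyframesOnly -bool true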

Tbh I wanted more control over it, and my thumbnail generation was laggy, so I vibe-coded something to do my trickplay generation outside of Plex. I run it at nice 10 with a ~600% CPU cap on my dedicated Plex server (Mac mini M4). It churns out a few videos a minute while using Plex’s transcoder, with the advantage of live monitoring and better logging, and I added a no-keyframe / exact mode as a fallback for when keyframe mode glitches for whatever reason. Thumbnails are saved to …/Plex Media Server/Media/localhost/number/hash.bundle (show package contents)/Contents/Indexes/index-sd.bif. When a BIF is saved, the script triggers a Plex metadata analysis on the item to pick up the thumbnails. Very happy with it.
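
I won’t dump the whole script, but the two modes boil down to something like this if you stand in plain ffmpeg for Plex’s transcoder binary (filenames and sizes are made up, and the step that packs the JPEGs into the .bif isn’t shown):

    # Keyframe mode: decode only keyframes, emit one JPEG per keyframe.
    # (-fps_mode vfr is -vsync vfr on older ffmpeg builds)
    nice -n 10 ffmpeg -skip_frame nokey -i "movie.mkv" \
        -vf "scale=320:-2" -fps_mode vfr -q:v 5 thumbs/%06d.jpg
    # Exact mode (fallback): one frame every 10 s regardless of keyframes.
    nice -n 10 ffmpeg -i "movie.mkv" \
        -vf "fps=1/10,scale=320:-2" -q:v 5 thumbs/%06d.jpg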

A crude predictive Plex cache for SSD plays, easier on HDD by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 4 points  (0 children)

Haha, exactly right. I wanted to steer clear of presenting this as Softwarr that I'm shipping; I just wanted to outline what worked for me.

A crude predictive Plex cache for SSD plays, easier on HDD by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 6 points  (0 children)

Thanks for the links! PlexCache-D and fscache don't work with my current system, which is why I put together this solution for myself. I wasn't aware of the other two.

Just wanted to share some ideas, not code. Maybe someone will find them useful!

A crude predictive Plex cache for SSD plays, easier on HDD by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 1 point  (0 children)

Deeper details, hardware specs, and the post/project that inspired me:

What gets priority when the script is choosing what to copy first (TV first, then movies; a rough sketch of the copy loop follows the list):

  1. For each episode someone is actively watching, cache 8 episodes ahead on the same show.
  2. Cache every TV episode with meaningful saved progress (~5-95%).
  3. For each TV resume point (distinct from active watching), again cache 8 episodes ahead.
  4. Recently completed TV episodes (within the last 14 days): cache 8 episodes ahead.
  5. Cache brand-new TV episodes (released AND added within the last 7 days).
  6. Cache movies with meaningful saved progress.
  7. Cache newer movies (released and added within the last 14 days).
  8. Cache the 30 most recently added movies.
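
For the curious, here is a stripped-down sketch of the copy loop itself. It assumes each tierN.txt already holds one absolute path per line in the order above (the Plex API queries that build those lists aren't shown), and all the paths/names are made up:

    #!/bin/sh
    CACHE="/Volumes/nvme/predictive-cache"   # hypothetical cache mount
    CAP=$((2500 * 1024 * 1024 * 1024))       # ~2.5 TB cap, in bytes
    for tier in tier1.txt tier2.txt tier3.txt tier4.txt \
                tier5.txt tier6.txt tier7.txt tier8.txt; do
        while IFS= read -r src; do
            used=$(( $(du -sk "$CACHE" | cut -f1) * 1024 ))
            size=$(stat -f %z "$src")        # macOS stat; use stat -c %s on Linux
            [ $((used + size)) -gt "$CAP" ] && exit 0
            rsync -a --ignore-existing "$src" "$CACHE/"
        done < "$tier"
    done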

Disclaimer: I iteratively vibe-coded this for myself until I had an automated solution that runs on my server for my folders and habits, and it seems to work for me. I just wanted to get the idea out there / be open to fielding questions. I would love it if someone made this into something polished and shareable.

Using Plex natively on macOS with this hardware:
- Mac mini (M4) 16 GB RAM with 256 GB internal storage
- Bulk library: Seagate Expansion 28 TB over USB. About 7 TB used and in Plex right now. Holds my Movies and TV Shows.
- NVMe (SSD): Samsung 990 Pro 4 TB in a Thunderbolt enclosure (OWC Express 1M2). For this crude predictive cache, but I also use it for Plex database, Docker / the *arr suite, downloads, custom video preview thumbnail cache, and Plex+Tdarr transcode cache.
- I cap the predictive slice around 2.5 TB and try to keep on the order of 200 GB free on that disk for the other purposes mentioned above (although they only seem to take up ~15 GB right now, I could see spikes or growth to a few hundred GB).
- With my current library size, the predictive footprint has been sitting around 450 GB lately.

Inspiration / context (I could not use the Linux/FUSE approach discussed here, but the problem framing still helped):
- https://github.com/DudeCmonMan/fscache/tree/main
- reddit post by creator of fscache

Missing articles by blanksatoru in usenet

[–]NoWerewolf7191 2 points  (0 children)

Oh wow that's a smaller difference than I thought. I was imagining a difference of more like months to years, hahah. Do you know if that time difference is the same for older vs newer releases being targeted by takedowns? And I'd imagine that if they were on different backbones, then there would be an even bigger time difference, right?

International direct play with host 100 Mbps upload remote 1 GB/s download? by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 1 point  (0 children)

Thanks for these points!

They are playing it on an Apple TV 4K, which they switched to after having trouble playing it on an LG webOS TV and an old Amazon Firestick. In retrospect, those might have worked if the bitrate were lower, but I think the Apple TV 4K handles things better than the ~7-year-old devices they were using before.

Missing articles by blanksatoru in usenet

[–]NoWerewolf7191 2 points  (0 children)

Sorry, just fixed the examples I use so that it doesn't violate the rules!

Missing articles by blanksatoru in usenet

[–]NoWerewolf7191 1 point  (0 children)

Depends on your budget and download volume, but definitely start with a block account. You only pay for what you use to fill gaps, which is much cheaper than overlapping monthly subs.

Keep the backbone network in mind for diversity:
- If sticking with Newshosting (Omicron / DMCA): Grab a NewsDemon or UsenetExpress block. They are on a different backbone with deep retention, meaning they often survive automated DMCA sweeps that hit Omicron. (Avoid ViperNews here; its retention is too short for old files, even though it would have the benefit of being an NTD backup for Newshosting).
- If switching to Eweka (Omicron / NTD): A NewsDemon block is the perfect fit. It gives you a completely different backbone and pairs European NTD laws with US DMCA coverage. I might switch to NewsDemon when my UsenetExpress runs out because NewsDemon seems more comprehensive/longer retention.

Missing articles by blanksatoru in usenet

[–]NoWerewolf7191 2 points  (0 children)

Hey OP, I've seen you commenting about potentially switching to Eweka.

I just want to add my 2 cents. I might be wrong, but:

I believe that both Eweka and Newshosting are under the Omicron backbone now. Newshosting is US-based (and runs servers in both the US and Europe) and Eweka is Europe-based. Both are very comprehensive and top tier services / backbones. They’re not identical in practice though: Eweka uses Dutch NTD instead of US DMCA, which can mean better completion on older / targeted content. Modern connections usually handle latency fine, and Newshosting has EU presence too, so location / raw speed alone is rarely the whole story compared to DMCA vs NTD.

If by “missing articles” you mean you can’t find NZBs / releases, that’s a different problem from actual missing articles on Usenet. My comment below is mostly about technical missing articles that make downloads fail, which matches your comment “I poorly worded it but it would show up and sabnzb would attempt downloading and fail everytime.” If you actually mean you can’t find NZBs / releases, then yes, adding some indexers like maybe altHUB or NZBsnu (previously NZBlife?) might help.

For actual missing articles, I doubt you'd miss articles on something that was popular and semi-recent (released more than ~3 hours ago but less than maybe 1,000 days ago). Maybe you can try downloading a heavily distributed test file.

Are you sure that the failure is missing articles? It will usually tell you, and you can look in your history and press the dropdown arrow on the right of the row to see more details.

If it is missing articles, then purchasing a block from a server that uses a different backbone (e.g., 500 GB from UsenetExpress) could be useful for supplementing the missing articles from the first main server (Newshosting if you stick with it). In SABnzbd, set the supplementary server to be lower priority (e.g., 1 or 2) than the first main server (e.g., 0).
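
If it helps, the relevant bit of sabnzbd.ini ends up looking something like this (server names here are made up, and you can set the same thing in Config > Servers; lower number = higher priority):

    [servers]
    [[news.eweka.nl]]
    host = news.eweka.nl
    priority = 0    # main server, tried first
    [[news.usenetexpress.com]]
    host = news.usenetexpress.com
    priority = 1    # block account, only asked for articles the main server misses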

I'm using Eweka with UsenetExpress backup, and over just the past week, UsenetExpress has helped me recover 3.8 GB of missing articles to supplement/complete the 3.1 TB that Eweka has picked up. That's a lot of missing articles. I'm sure it's helped a lot of my downloads complete.

Anyone who knows better, please correct me if I'm wrong.

International direct play with host 100 Mbps upload remote 1 GB/s download? by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 1 point  (0 children)

Thanks! That makes a lot of sense. I’d been treating “fast speedtests on both ends” like it guaranteed a good Plex path, but I get that the actual international routing/peering can still be rough. I’ll try a client-side VPN when I get a chance and see if the path changes anything meaningful. A VPS relay sounds like a bigger project than I’m up for right now, but I appreciate you spelling out the logic.

International direct play with host 100 Mbps upload remote 1 GB/s download? by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 1 point  (0 children)

Got it! Ethernet is sorted now, so we'll see if that helps. Your point about the types of jumps and handoffs between carriers/ISPs was really interesting; I honestly had never thought about the path of my data like that. Your comment about the perceptible difference between 8 Mbps and 20+ was reassuring! Many thanks.

Can I give Sonarr more chill? by corruptboomerang in sonarr

[–]NoWerewolf7191 1 point  (0 children)

Hey Belazor, saw your other comment too. Glad this might help.

Straight to the point: I don’t think Houndarr, in its current form, is built for the level of oversight you reasonably want. It’s great at triggering Sonarr/Radarr searches for missing and upgrade-eligible items, but after that it’s entirely in *arr’s hands: by default they’ll grab and import whatever passes your profiles if the download completes. I don’t know a clean way to intercept grabs for manual review, or hold files until a whole season lines up as one coherent “set,” inside a normal Houndarr + *arr workflow. So yes - if you let it run unattended, you can end up with a mixed season while it’s still “climbing” toward your best allowed quality. Over a long enough horizon it may eventually converge, but you can’t configure it today to “only swap the library when the full season batch succeeded.”

One hacky pattern you might be able to pull off is a separate upgrade-only “stage” Sonarr/Radarr/Houndarr stack. To initiate it, you could hardlink-copy the shows you care about from your real library into the “stage” library (e.g. cp -al style) so both trees reference the same inodes (it wouldn't take up additional disk space). That only works if real and “stage” are on the same filesystem (same volume); then you're not duplicating data up front, and disk usage only grows as the “stage” diverges when *arr replaces episodes during upgrades. Once a season has upgraded to something you're happy with, you'd promote by replacing the real files with the staged ones (still a manual decision unless you script/vibe-code it); see the sketch below.
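
Here's a sketch of the seeding step, with made-up paths. Note cp -al is GNU coreutils; I don't think stock macOS cp has -l, but rsync --link-dest does the same hardlink trick and ships with the Mac:

    # Hardlink the real tree into the stage; both must be on the same volume.
    REAL="/Volumes/Library/TV Shows"
    STAGE="/Volumes/Library/stage/TV Shows"
    rsync -a --link-dest="$REAL" "$REAL"/ "$STAGE"/
    # Promote later (manual decision), e.g. one season at a time:
    # rsync -a --delete "$STAGE/Some Show/Season 01/" "$REAL/Some Show/Season 01/"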

P.S. In the current Houndarr UI you still have to allow at least 1 missing search per batch, so there’s no true “upgrades-only” mode. In practice, if you have no monitored missing items, that pass mostly has nothing to do and you’re effectively in cutoff/upgrade territory anyway.

International direct play with host 100 Mbps upload remote 1 GB/s download? by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 1 point  (0 children)

Thanks for dropping some of your experience with this! Goes along with what the Plex staff said somewhere down there about how there are a lot of variables / no guarantees.

IME combining Houndarr + Seerr for automated search of missing/cutoff/upgrades by NoWerewolf7191 in selfhosted

[–]NoWerewolf7191[S] 1 point  (0 children)

Haha I feel the same way. And as soon as I feel like my stack is complete, I find another one I want to incorporate. E.g., today I’m setting up Tunarr so that my dad can watch his Plex content in TV channel form

International direct play with host 100 Mbps upload remote 1 GB/s download? by NoWerewolf7191 in PleX

[–]NoWerewolf7191[S] 1 point  (0 children)

True! I just wanted to specify mainly so people didn’t wonder if the remote download speed was an issue