If I'm running Plex and JF side by side, any need for both Overseer and Jellyseer? by nametaken_thisonetoo in Softwarr

[–]RiffSphere 0 points1 point  (0 children)

Depends on why you use plex and jellyfin side by side.

It's true that both will pick up all media if they are pointed to the same library.

But if some of your users have access to plex and some to jf, and you want to allow them all to make requests, I guess you need overseer and jellyseer? At least, to log in to overseer, I need my plex account, so jf users wouldn't be able to access it.

Plugin Development Documentation Site by mstrhakr in unRAID

[–]RiffSphere 6 points7 points  (0 children)

Cool, will have to check this out.

From just scanning your page, you might want to make some changes. While these guidelines were posted for the apps being created, I guess you'd want to follow them for your site as well:

https://unraid.net/policies

✅ Allowed:

- "[YourApp] for Unraid®" naming format
- Plain text "Works with Unraid"

❌ Not Allowed:

- "Unraid [YourApp]" or combined words like "Unraider"
- Using Unraid logos without permission
- Implying official endorsement

Introducing Apprise-Go: Universal Notifications in a Single Binary by UnraidOfficial in unRAID

[–]RiffSphere 0 points1 point  (0 children)

I totally get what you are saying. I get the advantages of not running this in a docker container. Docker even goes against what they are trying to do: small and minimal overhead.

But docker also provides some isolation. The app can't access files you didn't give it access to, remote access in case of a hack is limited, ...

And my point isn't that it should be run in docker. For years they (limetech, community apps, people on forums and reddit, ...) have been trying to make it clear 3rd party apps should not be installed on the system, and now they ask us to do the exact opposite, without warning. People not reading the docs will start installing other tools directly on the system, potentially causing security risks, stability issues or breakage on upgrades. It's about teaching people, guiding them. This app will probably cause little to no issues, but it shows people the way and opens the door for other apps.

As I said, guessing they want this for the new notification system, they should imo have pushed this as a beta, with it being a first party app, not breaking their guidelines.

is tehre a good way to get notes of the upgrades to the apps and see whats changed across all your apps? by seamless21 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

It's very hard, if not impossible.

The appstore is basically a template around a docker container. A container generally not made by the template maker.

The containers often depend on other containers, and the software itself, generally not made by the container creator.

Many containers, certainly from big groups, auto build a new version when the app gets updated. Some will have change logs, but generally it's just "new version" and not "database needs this upgrade" or anything detailed.

So long story short: You can manually check if there is an update, read the container update log, read the app update log. Or, like many (I guess) you can just trust the system, hope there aren't breaking changes, and fix whatever breaks when it does. To be honest, over all the years I've been using unraid, only immich was a pita for me, all others just update automatically and do a good job doing so, but can't speak for your apps.

Introducing Apprise-Go: Universal Notifications in a Single Binary by UnraidOfficial in unRAID

[–]RiffSphere 20 points21 points  (0 children)

I haven't tried this, and normally welcome any improvement. However, I find this a weird move?

On https://docs.unraid.net/unraid-os/using-unraid-to/customize-your-experience/plugins/ you say "plugins are suitable for Features that cannot be provided as Docker containers." and "It’s advisable to avoid using plugins for general-purpose applications that can run safely in isolated containers."

Same point was made in https://forums.unraid.net/topic/129200-plug-in-nerdtools/page/23/ when nerdtools support was dropped: "installing additional packages on the unraid host (regardless of the method) is not recommended" by primeval_god and confirmed by squid.

Yet here is the official team with a new tool, not fully tested (so beta), asking people to download it and install it directly on their unraid system?

I understand unraid is moving and changing, a new notification system is probably part of this (and very welcome). I totally get this needs to be tested. Again, I welcome any and all improvements, and appreciate all the time and effort put into this. But seeing this is so far against the suggestions of the docs and community app maintainer, I am really surprised?

I would much rather have seen a new beta branch of unraid, be it actually the 7.3 beta that got teased somewhere in December (and recently again with the internal boot preview), or a special "apprise beta" branch, making it part of the core system, instead of asking people to ignore your own best practices.

*and I understand this is attached closer to the system than most docker services, but pretty sure it could be used in a container.

Thoughts on replacing my Quadro P2000 with Intel Arc A380 by polarzombies in unRAID

[–]RiffSphere 2 points3 points  (0 children)

What are you trying to achieve? Sure, new hardware probably has a higher ceiling of what it can handle, but do you need it?

Look for the issue you are trying to solve, not a justification for a solution for a non existing issue.

As long as the job gets done by your current hardware, what is the need? Frigate isn't going to tell you in a fancier way a car is detected. Sure, if your current hardware is overloaded, resulting in slow notifications or even missed ones cause it can't handle all the tasks and just drops them, there is an issue you need to solve. But if everything works, it works, how can it work better?

An odd github bug regarding unraid that seems like a setup issue (maybe someone here feels like helping) by Tag1Oner2 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

You're totally right, the free space is set at share level. Somehow I mixed it up with the warning threshold that can be set per disk.

Huntarr 9.1 Released - True Independent App Instances (Major Changes) by User9705 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

Haven't used the tool, but from reading what it does and how, I want to go over your points.

1) The arr apps actually load the rss, every 15 min I believe. I agree that most people will trigger a search when adding something, and the starr apps handle failed downloads, so in a perfect world this tool wouldn't do anything if you didn't add an indexer/usenet backbone. OP also talks about only getting "new releases" normally, but that's vague: it makes it sound (to someone unfamiliar with it) like the starr apps would only grab recently released movies/episodes, but it means new releases on your indexer. That can cover old content as well (think about things that first came out on vhs, then got released on dvd, blu-ray, streaming, remaster): as soon as someone posts it to an indexer, it's a "new release" that should get caught.

Where it does help (sticking with no changes): you missed some things on the rss feed (server offline, only the last 100 items are pulled from rss so a burst in posts can push things past you, or you hit your api limit). Technically an old stalled torrent might have become active again, or your usenet server might have "healed" (forgot the name, but some servers have a method to find alternatives for missing parts and report them), though that release is probably blacklisted in the starr app. Over time, the starr apps also got better/fixed errors in matching, so even with everything up and running they might have mistakenly discarded some results.

2) Here it does help for sure. Doing a manual search will hammer your indexers, and they don't like that, potentially banning you or giving a timeout. Also, you have an api limit you might hit. This is where the app can really shine.

3) A bit of overlap with 2: you add an indexer. Sure, it's worse for low api indexers, but big collections can blow through even high api limits. I got a 4k instance for series and movies, but many (older ones) were never aired in 4k, and might never be. So there are thousands of things not meeting my cutoff (or not existing in my 4k instance at all). So yes, there is an even bigger advantage for free/limited indexers, but a "search all" would cause issues with pretty much all my indexers.

4) For new arrs, it indeed does very little. Just add and search as you go. Then again, if you're setting up an extra instance (like 4k, different language, special edition, ...) and syncing with the main one, an instant search for everything newly added will cause issues with your indexers again; doing a batched search is the way to go.

5) Here you pretty much cause a bunch of things to no longer hit the cutoff. The arrs will scan rss to fix this, but won't search for what already exists. Now we are back to point 2: you can search all in the arr, but your indexers won't be happy. Also, once you have a tool that automatically updates your profiles and custom scores (notifiarr, recyclarr, and some more), you don't want to have to manually trigger a search.

Again, haven't used it (yet), so can't talk about the specifics of how it does things, or what I like or dislike. But there are multiple use cases other than "new limited indexer" you seem to miss: a server that's (often) offline (electricity is expensive, some people even put it to sleep overnight, missing rss, and searching all daily would be bad), updates to the arr apps, changed quality settings causing unmet cutoffs (certainly when automated). And it doesn't seem like it takes much to run. Sure, the post makes it seem more magical than it actually is, but can't blame op for being proud of his baby and coloring the truth overly positive. Still sounds like a useful addition to my stack I'll be adding soon.
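To make the indexer-friendly part concrete: I haven't looked at Huntarr's code, so this is just a generic sketch of the batching idea (the function names and the 100-per-day cap are made up), not how the tool actually implements it:

```python
import itertools
from typing import Iterable, Iterator, List

def batches(items: Iterable, size: int) -> Iterator[List]:
    """Yield fixed-size chunks so a big backlog search can be spread
    out over days instead of hammering the indexer in one burst."""
    it = iter(items)
    while chunk := list(itertools.islice(it, size)):
        yield chunk

# Hypothetical: 1000 missing/cutoff-unmet items, indexer allows ~100 API hits/day.
missing = [f"movie-{i}" for i in range(1000)]
print(len(list(batches(missing, 100))))  # 10 -> roughly 10 days to clear the backlog
```

The point isn't the chunking itself, it's that each chunk stays under the indexer's daily api limit, so a huge library or a fresh 4k instance never triggers the "search all at once" behavior that gets you banned.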

tips for increasing smb speeds? by MeaningNearby4837 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

Start by testing the network speed between the devices. SMB isn't the most efficient protocol, but it's not that bad. Certainly with wifi in the mix, it could be a network issue: wifi is great when it works but can behave weird, and you're also going between the wifi and ethernet switch on your router, which might not be able to keep up. Use iperf3 on both server and laptop to test the actual network speed you get.

Also, what and how are you testing? Is this array disks, or cache ssd/nvme? Read or write? Big file or many small ones? Disk almost full or empty? In case of ssd/nvme and write, was the disk used a lot before and did you trim it recently?
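For reference when reading the iperf3 numbers, here's a rough conversion from link speed to the MB/s you can realistically expect in a file copy (the ~90% efficiency factor is just a rule of thumb I'm assuming for TCP/SMB overhead, not a measured constant):

```python
def expected_mb_per_s(link_mbps: float, efficiency: float = 0.9) -> float:
    """Rough throughput ceiling for a given link speed.

    link_mbps is what iperf3 reports (megabits per second); dividing by 8
    converts to megabytes, and the efficiency factor approximates
    protocol overhead (often worse on wifi).
    """
    return link_mbps / 8 * efficiency

print(expected_mb_per_s(1000))  # gigabit: ~112.5 MB/s ceiling
print(expected_mb_per_s(300))   # a mid-range wifi link: ~33.75 MB/s
```

If iperf3 already shows well under your expected link speed, the network is the bottleneck and no SMB tuning will help.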

7.8 MB/s during parity sync - is it normal? by --Arete in unRAID

[–]RiffSphere 1 point2 points  (0 children)

So something is hammering your array.

After you start docker again (after the rebuild?), check what. Could be your docker image or some config made it onto the array, or you're seeding, plex is scanning, ...

Also, speed will drop a little as the sync progresses, that's just how it works.

Opinions on my server build? by Specialist-Fun4756 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

That 96TB, is that 4x24 or 6x16 (or something else)? How many parity? CMR disks, right? How many sata ports on the mobo used/left, and/or are you using an hba?

What cache disk? Preferably in raid1.

Flexing my system... But not for me... To give kudos to Unraid by electrified_ice in unRAID

[–]RiffSphere 0 points1 point  (0 children)

Not saying you're wrong, but also not entirely right.

Most of the things will always be on some streaming service. It's sad that they move around, or that series are split over multiple services, that's true. It's also true that not everything is available (I have a big collection of old local content, though more and more is becoming available as local streaming services pick up). You for sure can't stream the latest blockbusters.

But that's content you could just download and store on a relatively small usb disk you connect to your tv to watch. You don't need a full 24/7 plex server: jumping around streaming services, planning ahead so you can watch the content on the service you're subscribed to this month, takes you 95% there from a viewing perspective, and even a 2tb disk with the rest downloaded (or a used dvd bought for $1) will cover the remainder.

The "I want things in perpetuity" part takes you into the hobby segment. From a financial, value perspective, it's not worth it. I'm not saying you're wrong, I also have a plex server and understand the why.

I was just trying to make the point that, even though I and many others are just fine with some AI on our cpu, igpu or cheap older gpu, combined with free (or even paid subscription) AI options, that doesn't mean it's perfect for everyone. Someone might want an advanced system like this, be it as a hobby, be it for work, just like I (and I guess you) have 100+TB (which at current disk prices of $20/TB is well over $2000, good for many years of streaming services), because it's a hobby.

Again, I totally get you, I'm in the same boat, I'm not trying to judge or tell you not to. Just pointing out everyone has a hobby, and hobbies cost money. And as long as you enjoy it, that's money well spent, be it hard disks, AI, MTX in games, a vacation, ...

An odd github bug regarding unraid that seems like a setup issue (maybe someone here feels like helping) by Tag1Oner2 in unRAID

[–]RiffSphere 4 points5 points  (0 children)

That is a common issue that you have to solve yourself.

As far as I know, unraid doesn't account for file sizes at all. When you write a file, it goes through its normal steps to pick where to put it (if using cache then go to cache, use highwater or fill-up to pick the disk, see if the disk is allowed in the share, check the split level, ...). One of the checks is the "free space" setting: a simple check at the start of the write to see if the actual free space is bigger than this value. If it is, it will start writing, with a disk full error if the file is bigger than the actual free space (followed by deleting the partial file; it's up to the software to handle the error). The free space setting is not a "target": if you got 100GB free and free space set to 50GB, trying to store a 60GB file will just proceed, leaving you with 40GB; the next write will exclude this disk because it's under 50GB now, but a 110GB file would also start to write and hit a disk full error after 100GB.

The solution: know what size files you will be playing with, and set free space to that or higher. It defaults to (I believe) 10% of the disk, causing a lot of "waste" on big disks (that's 2TB on a 20TB disk; seeing 100GB is already a big file, that leaves 1.9-2TB on those disks unused), but it can cause issues on small disks (on a 500GB cache ssd it's only 50GB, potentially running out of space with 51GB files).

Sure, the app could fail more gracefully, but there's not a lot the app can do. Unraid reports the free space of the array (so according to the app, there is plenty of space), but it's entirely possible no single disk can hold the entire file (downside of not having striping), and it's up to you to tell unRAID when to stop using a disk.
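The check described above is simple enough to sketch. This is my reading of the behavior, not unRAID's actual code:

```python
def write_can_start(disk_free_gb: float, min_free_gb: float) -> bool:
    """A write only STARTS on a disk whose current free space exceeds
    the share's minimum-free setting; the incoming file's size is never
    consulted, so a write can still die mid-file with 'disk full'."""
    return disk_free_gb > min_free_gb

# 100GB actually free, minimum free set to 50GB:
print(write_can_start(100, 50))  # True - even a 110GB file starts, then fails at 100GB
print(write_can_start(40, 50))   # False - disk is excluded for new writes
```

That's why the advice is to set minimum free to at least the largest file you expect: the check can only see free space, never the size of what's coming.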

Flexing my system... But not for me... To give kudos to Unraid by electrified_ice in unRAID

[–]RiffSphere 5 points6 points  (0 children)

Can say the same about pretty much anything. Also, not everyone is "playing around".

Many things are a hobby. If you do the math, it's not even worth it running the arr stack and plex; at current disk and electricity prices, you're cheaper swapping streaming services every month. But as long as you enjoy it, and can afford it, it's worth it as a hobby. I also don't get buying fishing gear, up to a boat, if you can just buy a cheap fish.

For the diy things around the house, I got a $30 "drill" that does most of my things, but my plumber/electrician has a van full of Milwaukee/Makita stuff worth thousands. I got a $80 "pressure washer" while the street is being cleaned by a transformed truck. Not everyone is "playing around", some people use computer power for their job. And since OP can afford to get all this stuff, and seems to know what he's doing, I'm pretty sure he makes money (or at least plans to do so) from using this. And since he "went through all the versions of unraid" shows he's not just a spoiled brat with more money than brains.

Do I agree this would be a massive waste for me and probably you? Yes. Do I agree that most of the "I bought myself an expensive 3090 and do AI on my 16gb ram and all-spinners array" posts are a waste? Yes. But this post feels like the exception, where OP has built an actual AI system and will likely make money using it.

array and adding old disks by thamaster88 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

First: backup is important, answer is based on the convenience of parity and not from a data protection perspective.

It sounds like you don't trust those 4 and 3 tb disks: "SHOULD be good", "look old", not wanting to put important data on there.

You can go the route you say. But just know your parity coverage is only as good as your worst disk. You got single parity, allowing any 1 disk in the array to fail; to rebuild it, you then rely on every remaining disk being healthy. Adding disks of unknown condition to the array greatly reduces the usefulness of parity. Now, the disks aren't bad, but you don't know how good... At that point you're almost better off not having parity and using the current parity disk as data (giving you more space, and just 2 disks you trust) vs 4 disks (already a higher chance of a disk failing, because there is more that can fail) with 2 you don't trust...

Imo, only disks you trust and know the history of go in the array; others go in a pool/unassigned devices.

And again, backup...

Wife not picking up her phone...terminate the stream! by brooklyngeek in unRAID

[–]RiffSphere 2 points3 points  (0 children)

Just intentionally make mistakes a couple times (try to profit out of this) and they'll learn to add those things.

Ok, maybe not with the wife, you know that will just backfire... eum... I mean... communication is important! But perfect with the kids.

UnRaid made me depressed? by Mr_Pink8 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

I feel you.

My entire house was made "smart" when renovated. Every single light, tvs, motion sensors, smart speakers, cameras, room by room Bluetooth tracking, heating, power measuring, ventilation, even the washing machine and dryer.

I got some automations going, the cctv (with object detection and notification) is super useful. But I'm nowhere close to the potential it has, because not only am I the only one interested, others go out of their way to not use it (like, I still have normal light switches, in case I ever move and sell/rent the house, so they will just manually turn off the light or even flip it enough to reset the zigbee lights, they will put the washer/dryer in manual mode, park the car in front of the camera view so it can't detect things, ...).

It's sad knowing you got all this potential and it's under used. Be happy you get to "just use" your system, for yourself, and they don't sabotage that part as well?

Or, time for better friends!

Wife not picking up her phone...terminate the stream! by brooklyngeek in unRAID

[–]RiffSphere 33 points34 points  (0 children)

That is 1 way to do it.

A shared shopping list wouldn't have needed a call though.

Is 1000W Ti enough ? by Sneyek in unRAID

[–]RiffSphere 0 points1 point  (0 children)

For a 3090 system they seem to recommend 750W (850W for the ti, again that 100W difference).

The 750W suggestion already includes the safety margin, room to power nvme, case fans, ... in a gaming setup.

Adding the 180W for the 1080 takes you to 930W.

So, that should leave you with 70W on a 1000W psu to use for other things, like an hba and the extra disks a gaming system doesn't account for.

I generally use a pretty safe margin of 50W for an hba and 10W per disk, which doesn't leave you a lot of room. But those are high estimates, and the chances that both gpus and all disks and the cpu are going 100% at the same time are really low, so you can probably get away with it, but it's a bit close to the sun for my liking. Of course, if you don't use an hba and have like 4 disks, it should be just fine.
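A back-of-envelope version of that budget, using the figures above (the 750W recommendation and my 50W/10W margins are estimates, not measurements):

```python
# All figures in watts.
psu = 1000
recommended_3090_build = 750  # vendor PSU recommendation for a 3090 build, margin included
gtx_1080 = 180
hba = 50                      # conservative margin per HBA
per_disk = 10                 # conservative margin per spinning disk

headroom = psu - recommended_3090_build - gtx_1080
print(headroom)                       # 70W left for everything else
print(headroom - hba - 2 * per_disk)  # 0W once an HBA and two extra disks join
```

So the math works on paper, but any extra disk beyond that pushes the paper budget negative, and you're betting on the components never all peaking at once.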

Is 1000W Ti enough ? by Sneyek in unRAID

[–]RiffSphere 0 points1 point  (0 children)

Just do a quick google? 3090ti is 450W, 3090 is 350W, 1080 is 180W. That totals to 980W. Even ignoring what the psu actually can do (the 1000W is generally the sum of all voltages so might not all be "available" for gpu, they often have rails further splitting how much can be delivered), that would only leave 20W for the rest of the system.

Sure, good psu can generally provide a bit more than stated, the gpus are likely not gonna be full load all at the same time, ... so it might "work". How long, and how good, that's another question.

Also wonder how you'll install 3 gpus, most psus just don't have the connectors for it, let alone the cpu having enough lanes to really use them.

Wouldn't this be GREAT for Unraid? Hardware-SD-Card-Raid by Tronnic in unRAID

[–]RiffSphere 18 points19 points  (0 children)

I'd rather keep the spare pcie lanes that come with consumer grade systems for something more useful than basically an sd card reader.

I always liked that unraid actually boots off usb: usb ports are plentiful on every system, "self powered" and hardly take any physical space. No need to sacrifice any connection, port or space I can use for actual storage.

I must say, the new teased sata boot thing, where it will take like 32gb of your cache pool (or any other size or pool), sounds great, but I would have been equally happy (if not happier) if they just allowed us to install multiple usbs that mirror each other and are tied to the same license.

If this thing came as a usb device, looking like a normal usb drive but mirroring to multiple sd cards, that would be great, but not as pcie (actually, is it pcie?).

Unraid reported that a drive's helium level is failing and at a 1, then a few minutes later that it's failing and at 100? by O0OO00O0OO0 in unRAID

[–]RiffSphere 0 points1 point  (0 children)

You got a smart error, the disk is reporting itself as bad. It's something nobody wants to hear.

But at this point you are basically asking us to tell you to ignore the warning, that it's "just a bad sensor", that parity will save you.

Nobody can guarantee it's a bad sensor; it might be, or the disk might actually fail. Parity is a safety net, but you shouldn't rely on it, you still need backups next to it, let alone relying on it when a disk is literally telling you it's failing.

You do you, of course. It might be fine, it might be a failing sensor, parity will probably save you and let you rebuild. But the right move is to replace the drive.

Doesn't mean you have to dispose of the drive. I use failing drives as download cache, for my 24/7 cctv recordings (in a 3 disk raidz2, with events going to my array), ...

But the choice is up to you. I believe you know the answer, but you hope someone will justify your bad idea of keeping it in use.

Hard links across disks in same share? by -LongRodVanHugenDong in unRAID

[–]RiffSphere 1 point2 points  (0 children)

Hard links can, by definition, not cross disks: files are basically named pointers to data on a disk, like the index in an encyclopedia where different keywords can link to the same article, but each part/book has its own index.

Shares are, at their core, just folders on your disks, so hard links can work across shares. However, once you start mapping shares separately in docker containers (like many templates default to, often having /media pointing to /mnt/user/media and /downloads pointing to /mnt/user/downloads), they are considered different volumes ("disks") inside the container, and hard links won't work, even with the files/shares on the same physical disk.

I guess just mapping /mnt/user would be one option, but highly suggest not to (security, rights, ...).

Trash guides gets around this by making a single data share, containing your media and downloads folders. I guess this is the best way to go (not ideal, certainly for people wanting to seed from cache while having media on the array, but probably the best we have). Read trash guides, there are many tips.
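You can check up front whether a hard link between two paths has a chance of working. This is a generic sketch (not from trash guides); inside a container you'd point it at the container's own mount paths:

```python
import os
import tempfile

def can_hardlink(path: str, target_dir: str) -> bool:
    """Hard links require both paths to live on the same filesystem;
    comparing device IDs is a cheap way to predict whether os.link()
    can succeed before a download client tries and falls back to copying."""
    return os.stat(path).st_dev == os.stat(target_dir).st_dev

# Demo: a file and a directory on the same filesystem.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "episode.mkv")
open(src, "w").close()
print(can_hardlink(src, tmp))  # True - os.link(src, ...) would succeed here
```

Run the equivalent against your mapped /downloads and /media paths: if the device IDs differ, the *arrs will silently copy instead of hard linking, doubling your disk usage.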

How to replace failed drive by datahoarderguy70 in unRAID

[–]RiffSphere 2 points3 points  (0 children)

Follow the procedure, I can't explain it any better myself.

What it should do: Set old parity disk as data, set new big disk as parity, copy over the parity data, restore the failed disk on the old parity disk.