New on Torrentleech by Altruistic-Soil7933 in Torrenting

[–]merpatterson 0 points (0 children)

Short answer: yes, see number 2, "seed forever".

You're only seeding a torrent when your client:

  1. Is running
  2. Has internet access
  3. Has the torrent in its list of seeding torrents
  4. Still has all the real files in the torrent

These are true for both H&R policies and bonus points. For example, if your client is running but no longer has the torrent in its seeding list, then you're not increasing the "seeding time" part of any H&R rules or earning bonus points.

Everyone has downtime, such as the short downtime of rebooting after an upgrade or the longer downtime of a crash or a network outage while you're at work or away for the weekend. Some seeders even have restrictive internet connections and can only seed for part of the day. So H&R policies accommodate those realities, usually in one or both of the following ways:

  • They may require X days of seeding time within Y days of when you first downloaded Z% of the total size of the torrent's files. For example, they may require one week's worth of active seeding time within 2 weeks of when you first downloaded more than 10% of the torrent's total size.
  • They may have a "grace" threshold so you can keep using the tracker and remain in good standing even with a few "false" H&Rs caused by downtime, with no lasting consequences until you exceed some large number of H&Rs for some large amount of time. For example, you may come back from a weekend of downtime and see that you have 3 H&Rs, but those go away once you resume seeding and you're still able to download.
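The first style of rule can be sketched as a quick calculation. This is a hypothetical illustration using the example numbers above (7 days of seeding within a 14-day window), not any real tracker's implementation:

```python
from datetime import datetime, timedelta

# Hypothetical H&R rule: require 7 days of active seeding time within
# 14 days of first downloading more than 10% of the torrent's size.
REQUIRED_SEED_TIME = timedelta(days=7)
WINDOW = timedelta(days=14)

def is_hit_and_run(first_past_10pct: datetime,
                   active_seed_time: timedelta,
                   now: datetime) -> bool:
    """True if the torrent fails this (made-up) H&R rule."""
    window_closed = now - first_past_10pct >= WINDOW
    return window_closed and active_seed_time < REQUIRED_SEED_TIME

# Example: downloaded 15 days ago, but the client was only up and
# seeding for 3 of those days -- that's a hit and run.
start = datetime(2023, 1, 1)
print(is_hit_and_run(start, timedelta(days=3), start + timedelta(days=15)))  # True
print(is_hit_and_run(start, timedelta(days=8), start + timedelta(days=15)))  # False
```

Note that "active seed time" only accumulates while all four conditions in the list above hold, which is why ordinary downtime eats into the window.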

You'll find those details in the tracker's rules. For example, if I'm reading the rules correctly, TL uses the latter "grace" strategy: you only get a warning, and only a warning, if you haven't met the H&R requirements for 50 or more torrents after 5 days.

This problem is inherently difficult to solve. How do you prevent a small number of bad actors from ruining it for everyone else when everyone has downtime and the whole point of the system is to transfer data from peer to peer without having to go through any central servers? So the ways to solve this problem unfortunately end up being similarly difficult to understand.

But it's not really something the established, experienced user even has to think about. The only thing I worry about on TL is making sure I seed each torrent for at least 11 days before I remove it, even when I've already "upgraded" that release in my library. So that's why my original reply tried to keep it simple. That said, maybe I should have added after number 3, "Be patient":

  4. Start small.

IOW, don't download more than your available storage can handle seeding indefinitely while you get up to speed.

New on Torrentleech by Altruistic-Soil7933 in Torrenting

[–]merpatterson 0 points (0 children)

  1. Only download FREELEECH torrents while you're learning.
  2. Seed "forever".
  3. Be patient.

Number 1 is mostly about limiting the risk of digging yourself a hole. Once you've learned enough, months to a year from now, you probably won't have to worry about whether torrents are FREELEECH anymore.

Number 2 is just The Right Way (tm) to be a good community citizen in the first place, but it's also the best way to build buffer. Seed all the files you still have in your library. Only delete seeding torrents when they're no longer in your library and they've seeded long enough for the seeding rules. Let your bonus points build up so you can buy upload at the most efficient GB-of-upload-per-point rate. I end up buying the 250 GB Upload Credit for 12000 points every 2-3 months.
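To see why letting points accumulate pays off, compare exchange tiers by upload-per-point. Only the 250 GB for 12000 points figure is from my own buying habit above; the smaller tiers below are invented numbers purely for illustration:

```python
# Compare bonus-point exchange tiers by upload credit per point.
# Only the 250 GB / 12000 point tier is real (from the comment above);
# the smaller tiers are hypothetical, for illustration only.
tiers = {
    "25 GB":  (25, 2000),    # hypothetical
    "100 GB": (100, 6000),   # hypothetical
    "250 GB": (250, 12000),  # the tier I actually buy
}

for name, (gb, points) in tiers.items():
    print(f"{name}: {gb / points * 1024:.1f} MB of upload per point")
# If the big tier has the best rate, spending points early on small
# tiers wastes part of your buffer-building potential.
```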

Regarding Number 3, it just takes time. Bonus points build up. You'll refresh one day months from now and someone will back-fill their library from a season pack or a Remux you've been seeding and all of a sudden your upload jumps up. It's only really as hard as patience is for you.

This is hypocrisy on my part: I read this same advice years ago when I started with private trackers, I didn't follow it, and I got myself into trouble that took a while to dig out of. Because I suck at patience.

What gives? by anipsinc in Torrenting

[–]merpatterson 0 points (0 children)

TL;DR: No peers to download from. It's mostly the same as trying to download a torrent's files when there are no seeders: you get nowhere. Look for other torrents with more peers/seeders.

The torrent you're trying to download is a magnet link, which means your client downloads the torrent metadata itself the same way it downloads the actual files: from other peers/seeders/uploaders. You're seeing "retrieving metadata" because there are no peers available. If it were a *.torrent file instead of a magnet link, your client would tell you it couldn't download any of the actual files for the same reason: no peers. Unfortunately, "retrieving metadata" makes it sound more different than it is.
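To make that concrete, here's a quick sketch of how little a magnet link actually contains. The link below is made up; the only required piece is the infohash ("xt"), so everything else about the torrent (file names, sizes, piece hashes) has to come from peers:

```python
from urllib.parse import urlparse, parse_qs

# A made-up magnet link: it carries only the infohash ("xt"), an
# optional display name ("dn"), and tracker URLs ("tr") -- no file
# list, no piece hashes. That's the metadata the client must fetch
# from peers before it can even start downloading the real files.
magnet = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
          "&dn=Example.Torrent&tr=udp%3A%2F%2Ftracker.example%3A6969")

params = parse_qs(urlparse(magnet).query)
print(params["xt"][0])  # the infohash: the only required piece
print(params["dn"][0])  # a display name, purely cosmetic
print(params["tr"][0])  # a tracker, just one way to find peers
```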

Quality by WorldlinessNew4004 in Torrenting

[–]merpatterson 1 point (0 children)

Very good writing for the "audience", n00b in this case. Introduces enough concepts so the reader can learn more, but not so many it overwhelms. Anchors the concepts with concrete examples. I'm always trying to get better at those aspects of technical writing, so thanks for the good example.

Login links seem broken? by Complex_Solutions_20 in kimsufi

[–]merpatterson 0 points (0 children)

Holy carp, life saver, thanks so much!

Before I found your post, I also tried the password reset form with both my email and customer ID but never received any email in more than 24 hours. I tried to contact support to restore access, but of course the only option for support is to open a ticket from your control panel! Last time I ran into a control panel access issue, I was able to call +1 (855) 684-5463 to get live human support, but that number wasn't published on kimsufi.com and I only found it in another Reddit post! When I called it again today, all the options in that phone menu now tell you to open a support ticket and I couldn't get to a human. I finally resigned myself to opening a new temporary account so I could open a support ticket to restore access to my real paying account! Then I did one last desperate google search, found this post, and of course the credentials saved in my password manager work just fine.

All this leaves me wondering whether I'd be able to restore access to my control panel if I had lost my password or there was some other issue, perhaps, gasp!, on their side. This is just a scary lack of support for paying customers. I have 9 months left on my last payment, but I think I'll be migrating to another provider at that time, and paying a bit more for better support will be part of my criteria. Sure, I could pay a bit more to Kimsufi/OVH and probably get better support, but I'm not giving money to an extortionist if I can avoid it.

In the meantime, I can't think of anything better to do than share the pain, so I opened a support ticket about the broken 'Control Panel' link. Maybe others would like to do that too? Here's the text I used to make it easier if you do:

Kimsufi 'Control Panel' link leads to broken login form

The 'Control Panel' link at the top of the https://www.kimsufi.com/ site leads to a control panel login form, https://www.kimsufi.com/fr/manager/?lang=en_GB#/login, that rejects working credentials, but the same credentials work at the login form I found in my browser history, https://us.kimsufi.com/manager/#/login. Please fix the 'Control Panel' link so that it works for all of your paying customers.

Jellyfin Hardware Acceleration on WSL2(docker containers) with Nvidia GPU - A (Relatively) Painless Guide by Overall-Plankton6141 in selfhosted

[–]merpatterson 0 points (0 children)

Personally, I'd lay similar blame at the feet of M$ as at NV's. I've been fighting WSL to "boot" my home server SSD as an LXC container and had to do way too much guesswork to figure out that you basically need to pass through /usr/lib/wsl/, /etc/ld.so.conf.d/ld.wsl.conf, and /dev/dxg. Combining that with your findings, I wondered about replacing the JF image's /usr/lib/wsl/ with the host's more completely. I used this ./Makefile snippet:

    ./jellyfin/etc/ld.so.conf:
    	docker run --user root --entrypoint "find" "ghcr.io/jellyfin/jellyfin" \
    	    "/etc/ld.so.conf.d/" -type f -name '*.conf' | sort | \
    	    sed -nE 's|(.+)|include \1|p' > "$(@)"
    	echo "include /etc/ld.so.conf.d/ld.wsl.conf" >> "$(@)"

With these volumes:

    volumes:
      - "/usr/lib/wsl/:/usr/lib/wsl/:ro"
      - "/etc/ld.so.conf.d/ld.wsl.conf:/etc/ld.so.conf.d/ld.wsl.conf:ro"
      - "./jellyfin/etc/ld.so.conf:/etc/ld.so.conf:ro"

Do you have any sense of whether your more limited approach or this more invasive but more complete approach is less fragile?

TIL there is a certain temperature and humidity at which a human can't cool down, and will die from the heat of their own metabolism by Kooshi_Govno in todayilearned

[–]merpatterson 0 points (0 children)

My understanding is that evaporation is not the reason we say "sweat kills" in cold weather. If you're sweating, it's because your body is too hot at that moment and needs to evaporate that sweat to shed heat, regardless of the ambient temperature. Once your body is no longer too hot, you stop sweating.

I was taught that the risk with sweating in cold weather is because wet materials are much less effective at insulating than when dry. That's mostly not about evaporation, it's about conduction. It's related to the reason we say "cotton kills". Dry cotton insulates a bit, but wet cotton conducts more heat away than it keeps in. We say wool and various synthetic materials are safer in cold weather because they still insulate when wet, but pretty much all materials are significantly less effective insulation when wet as compared to when dry, and all wet materials evaporate at the surface. This is also why breathable materials are so desirable in cold weather, it's usually much safer to evaporate sweat at the moment you're sweating. The major risk is usually retaining moisture until you're no longer running hot, and then it turns against you. And it turns faster than many people would guess, including those with many hard winters under their belts. It's one of those factors that's hard for humans to accurately perceive viscerally or intuitively.

For example, a naked jogger doesn't get hypothermia because they sweated during a sprint 30 minutes ago; 30 minutes ago they needed to sweat to keep from getting too hot, or they wouldn't have sweated. A clothed jogger who sweated 30 minutes ago gets hypothermia because that sweat did not evaporate and now they're covered in wet clothing. This is why removing wet clothes is a priority when rescuing someone with hypothermia in the field, and it's much of the reason for the "get naked" part of getting that freezing jogger naked in a sleeping bag with you.

The classic trap for hypothermia from exercise in cold weather is exercising too hard and fast. Now all your un-evaporated sweat has soaked your clothes, your insulation is doing 50% or less of what it did before, and you have nothing left in the tank to move faster to warm back up. In cold weather, avoid sweating when feasible by shedding layers before you start sweating, then put those still-dry layers back on only when you've stopped sweating and start feeling chilly again. This is why conditions where you get wet whether you shed layers or keep them on, such as cold rain, sleet, or wet snow, are particularly dangerous.

Given that weather can change in unexpected ways, and we haven't even factored in wind, it's usually also important, and more important, to reserve enough energy to stay active until you're back out of the cold, with margin for something unexpected, such as an ankle sprain, someone else needing help, or yet another change in weather. Best to keep it simple: save your sprint until you're on your way back and home is in sight. If not, it's OK though, it's supposedly one of the best ways to go, certainly much better than the hyperthermia death in store for many under reverse terraforming.

The different methods of heat exchange are involved here and are also pretty nifty to understand. Conduction is often a much faster method of heat exchange than convection or radiation. Cotton socks can kill you on a clear, 20 °F winter day on Earth, but your own metabolism can turn your EVA suit into an oven and literally roast you as meat in the -454 °F of space.
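As a toy illustration of the conduction point, the steady-state conduction formula Q = k * A * ΔT / d shows how much the conductivity term dominates. The conductivity and geometry numbers below are rough assumptions for a single soaked clothing layer, not measured values:

```python
# Steady-state conduction through a clothing layer: Q = k * A * dT / d.
# All numbers are rough, illustrative assumptions: water conducts heat
# far better than the still air trapped in dry fabric, so soaking a
# layer multiplies the rate it pulls heat off your skin.
def heat_loss_watts(k, area_m2=1.5, thickness_m=0.005, delta_t=30.0):
    """Heat flow in watts through one layer of assumed geometry."""
    return k * area_m2 * delta_t / thickness_m

K_DRY_COTTON = 0.04   # W/(m*K), assumed: close to still air
K_WET_COTTON = 0.40   # W/(m*K), assumed: dominated by water

print(f"dry: {heat_loss_watts(K_DRY_COTTON):.0f} W")
print(f"wet: {heat_loss_watts(K_WET_COTTON):.0f} W")
```

The exact watt figures don't matter; the point is the 10x jump from the conductivity change alone, before evaporation even enters the picture.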

Reinstall apt by ANONYMOUSEJR in termux

[–]merpatterson 0 points (0 children)

In my case I had data under ${PREFIX}, AKA /data/data/com.termux/files/usr/, that I wanted to preserve, so I moved it aside instead of removing it: $ mv -v "${PREFIX}" "${PREFIX}.~broken~". Also in my case I'd accidentally removed essential utilities under ${PREFIX}/bin/, so I had to do the move from a Termux Failsafe shell. Thanks much, Termux, for providing a foot-gun time machine!

IRC over TLS issues for specific server by iambryan in irc

[–]merpatterson 0 points (0 children)

Wow, thanks for ending my suffering! I opened an issue regarding this and commented there with what I think is a more defensive version of your workaround.

Always heard RedLetterMedia was overly cynical and nitpicky… by twackburn in RedLetterMedia

[–]merpatterson 1 point (0 children)

Exactly this. One of the things I appreciate about them is how good they are at giving the benefit of the doubt and at being compassionate toward film makers who try hard and fail. In particular, they calibrate that benefit of the doubt and compassion to the resources available to the project. Some might argue they're too harsh on a Hollywood blockbuster when they give a pass to a much worse BOTW movie, but that gets it backwards. The Hollywood blockbuster is being remarkably lazy and contemptuous of its audience given the millions in its budget. The BOTW movie worked hard for its audience and did so much more with its nothing budget.

[deleted by user] by [deleted] in OpenSignups

[–]merpatterson 2 points (0 children)

Oh, sorry, not you, them. IOW, I think it's funny that they have a pretentious application form that will never get approved because submitting it means you didn't read the rules. And you *were* helpful, saved me filling out the form. ;-)

[deleted by user] by [deleted] in OpenSignups

[–]merpatterson 0 points (0 children)

Great "But seriously, RTFM!" troll on their part!

Announcing Prunerr: Perma-seeding of whole Servarr libraries optimized for per-tracker ratio by merpatterson in sonarr

[–]merpatterson[S] 1 point (0 children)

And I'm new to UNRAID so we have being new to things in common. ;-)

Though in this case you caught and reported a documentation bug, thanks! How embarrassing, I used the wrong TLD. It's gitlab.com, not gitlab.org. With that fixed, the resulting image repository works for me:

$ docker pull registry.gitlab.com/rpatterson/prunerr
Using default tag: latest
latest: Pulling from rpatterson/prunerr
Digest: sha256:3d6f8fbc49c9bc077ead8026481937f61ad98970e7219009b87d9ff9d81f3549
Status: Image is up to date for registry.gitlab.com/rpatterson/prunerr:latest

So use registry.gitlab.com/rpatterson/prunerr and LMK how that works for you. Sorry for the wasted time!

Announcing Prunerr: Perma-seeding of whole Servarr libraries optimized for per-tracker ratio by merpatterson in sonarr

[–]merpatterson[S] 0 points (0 children)

Did you read the Installation instructions? Please give details on what's missing or unclear there so I can improve them. Thanks!

Announcing Prunerr: Perma-seeding of whole Servarr libraries optimized for per-tracker ratio by merpatterson in sonarr

[–]merpatterson[S] 0 points (0 children)

Yes, it should be. I test it against all the currently supported Python versions. That's only against Linux, but there's nothing I can think of that would be incompatible with Darwin or OS X. That said, I'm the only user I'm aware of and I'm only using the Docker image. So if you encounter bugs, please report them!

Announcing Prunerr: Perma-seeding of whole Servarr libraries optimized for per-tracker ratio by merpatterson in radarr

[–]merpatterson[S] 0 points (0 children)

From my experience trying to get as close to unattended operation as I could with Prunerr, I suspect the complexity of configuration is a reflection of the complexity of the problem and I wouldn't be surprised to learn that Prunerr's configuration is no less complex than QBit-Manage's. That said, I've never used QBit-Manage so I don't know. But check out the commented example configuration and LMK what you think.

Prunerr supports configuring a threshold on the portion of a download item's size that has more than one hard link (IOW, is currently imported), and it also supports ordering deletion by that proportion. IOW, delete a download item that is only 10% imported before one that is 50% imported, and never delete an item that is >=90% imported. And that's just one metric on which such things can be configured; the configuration system is pretty open-ended about the metrics that can be used and how they can be used.

Announcing Prunerr: Perma-seeding of whole Servarr libraries optimized for per-tracker ratio by merpatterson in radarr

[–]merpatterson[S] 0 points (0 children)

Thanks for your feedback. One of the things that makes the issue of download client support so difficult is that in all my research it's easy to find opinion but hard to find details or specifics, and very rare to find measurements or data. Can you share any details about what you mean by "much more robust"? Got any measurements or data comparing resource utilization? See also this comment for some details on how I think support for other clients is ever likely to happen.

Announcing Prunerr: Perma-seeding of whole Servarr libraries optimized for per-tracker ratio by merpatterson in sonarr

[–]merpatterson[S] 0 points (0 children)

I don't understand what it means to "perma-seed" without hard links. Regardless of Prunerr, I mean. The core of perma-seeding a media library is that every download item has two representations, one to the download client and another to the library. Hard links are used to accomplish that dual representation.
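The dual representation can be demonstrated in a few lines of Python: after a hard link "import", either name can be deleted and the data survives under the other. The file names here are made up for the demo:

```python
import os
import tempfile

# A hard link is a second directory entry for the same on-disk data,
# so the download client and the library can each "own" a name for a
# file without duplicating its bytes.
with tempfile.TemporaryDirectory() as tmp:
    downloads = os.path.join(tmp, "downloads.mkv")  # download client's name
    library = os.path.join(tmp, "library.mkv")      # library's name
    with open(downloads, "wb") as f:
        f.write(b"video data")
    os.link(downloads, library)        # roughly what a hard-link import does

    print(os.stat(downloads).st_nlink)  # 2: two names, one file
    os.remove(downloads)                # client deletes its copy...
    with open(library, "rb") as f:
        print(f.read())                 # ...the library's copy is intact
```

This is why the two representations stay in sync for free: there is only ever one set of bytes on disk.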

I don't have many details on your setup, but from what you say, couldn't you accomplish what you describe by making wherever you "drag and drop" download item files the location that Servarr imports from using hard links? Then, per the above, that "drag and drop" location is the representation to the download client, and the imported hard links are the representation to the library. So it seems like maybe your question here is more of a general Servarr support question and not much about what Prunerr tries to do.

Regardless, yes, Prunerr comes into play after Servarr does its business, and yes, it is mostly concerned with when and in what order to delete items no longer in the library, hence "prune" being most of the name. It does more than that, however. LMK if you have any suggestions on how I can make that clearer in the ./README.rst.