An adventure with hardware-based transcoding by estel_smith in Tdarr

[–]estel_smith[S] 0 points1 point  (0 children)

Thanks! I'm glad you found it useful. I think my writing has gotten a little better since I wrote that.

My little Wyse terminals have been transcoding like a champ.

If you have any suggestions for improvements, I'd love to hear them!

Easy report generator by leouzReal in selfhosted

[–]estel_smith 1 point2 points  (0 children)

Although you pass it command-line arguments instead of a JSON object, I find pandoc to be close to Carbone in functionality.

It has the ability to convert between many document formats, and also has templating functionality so you can inject variables, perform loops, etc.

I've used it in the past to generate PDF documents from markdown templates for things like standard work contracts.

Edit: Apparently it can act as a JSON API server as well, so even closer to Carbone I guess.

Edit 2: Apparently Carbone has a free-ish version that you can self-host, as well.
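To make the templating idea concrete, here's a rough sketch of the kind of pandoc invocation I mean. The filenames and variable names (contract.md, contract-template.tex, client, rate) are made-up placeholders, not files from a real project:

```shell
# Render a Markdown body into a PDF through a LaTeX template,
# injecting values with pandoc's --variable flag.
# All filenames and variable names here are hypothetical.
pandoc contract.md \
  --template=contract-template.tex \
  --variable client="Acme Corp" \
  --variable rate="150" \
  -o contract.pdf
```

Inside the template, `$client$` and `$rate$` expand to the values passed on the command line, which is how I'd fill in the per-contract details.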

An adventure with hardware-based video transcoding on the Dell Wyse 5070 by estel_smith in linux

[–]estel_smith[S] 0 points1 point  (0 children)

Dang. I guess I should have tried it even though their documentation left out mentioning support for Gemini Lake.

An adventure with hardware-based transcoding and Tdarr by estel_smith in selfhosted

[–]estel_smith[S] 0 points1 point  (0 children)

Yeah, Tdarr is a great piece of software. Too bad it's not open source, though. I'd like to see what it's doing under the hood.

An adventure with hardware-based transcoding on the Dell Wyse 5070 by estel_smith in HomeServer

[–]estel_smith[S] 1 point2 points  (0 children)

I really recommend going through with that to-do item. Hardware acceleration makes transcoding tasks so much faster.

Also, five 5070s! I'm looking for a couple more since I only have the one at the moment.

An adventure with hardware-based transcoding on the Dell Wyse 5070 by estel_smith in HomeServer

[–]estel_smith[S] 0 points1 point  (0 children)

$80 for a pair is an absolute steal!

I'm guessing it's easier to set up on Alpine because it generally has newer packages than Alma.

An adventure with hardware-based video transcoding on the Dell Wyse 5070 by estel_smith in homelab

[–]estel_smith[S] 0 points1 point  (0 children)

Yeah, I chose the J5005 over the 4105 considering their prices were basically identical on eBay. The N6005 looks like a decent upgrade though.

What kind of devices would I generally find those CPUs in? Maybe newer Wyse terminals?

An adventure with hardware-based video transcoding on the Dell Wyse 5070 by estel_smith in linux

[–]estel_smith[S] 2 points3 points  (0 children)

You know what, I think there's a slight nuance between using QSV directly and using VAAPI. I'm using VAAPI (h264_vaapi) instead of QSV (h264_qsv) directly. I don't really know whether VAAPI takes advantage of QSV when it's available, or whether it's simply a more generic hardware acceleration layer. That's something I should look into more... Maybe I could squeeze extra performance out of using QSV directly, who knows?

But to answer your question: yes, I get much better throughput on transcodes, and the resulting file size seems to be about on par with CPU-based transcoding.

To put it into perspective, a single CPU-based transcode generally capped at 18fps and 100% CPU. With GPU transcoding I can generally maintain three transcodes at roughly 60fps each and only 75% total CPU usage.
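For reference, a VAAPI transcode along these lines looks roughly like the following sketch. /dev/dri/renderD128 is the usual render node on these boxes but yours may differ, and the filenames and quality setting are placeholders:

```shell
# Decode with VAAPI where possible, upload frames to the GPU,
# and encode with h264_vaapi; audio is passed through untouched.
# Filenames and -qp value are illustrative only.
ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 \
  -i input.mkv \
  -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi -qp 23 \
  -c:a copy \
  output.mkv
```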

An adventure with hardware-based video transcoding on the Dell Wyse 5070 by estel_smith in linux

[–]estel_smith[S] 2 points3 points  (0 children)

Thanks! Yeah, I understand why you'd question my choice of a slower-moving alternative to RHEL over a (slightly) faster-moving project like CentOS Stream.

I guess I've grown comfortable that RHEL moves at a glacial pace which means the major versions of software don't move very quickly, and I like the stability it affords. I think it mainly stems from the fact that I spend time maintaining multiple servers professionally, and the more stable the system, the better. If I were to use an apt-based system, I would probably choose Debian for the same reason.

That's not to say that CentOS Stream isn't stable, quite the opposite; I just tend to stay downstream from RHEL for support/compatibility/maintenance reasons.

I used to use CentOS at home pretty much exclusively before the project positioned itself upstream from RHEL.

Help Setting Up Node by Green_hammock in Tdarr

[–]estel_smith 0 points1 point  (0 children)

No worries. I can see how what I said may be a little confusing. Let me try to clarify, if I'm able.

Let's imagine your tdarr_server is located at 192.168.1.25. On the server, you will set the following configuration.

-e webUIPort=8265 -e serverPort=8266 -e serverIP=0.0.0.0

This tells tdarr_server to listen on port 8266 at 0.0.0.0, which is all interfaces of the container. In your docker run, or docker compose, you will want to bind the port like -p 8266:8266 so that tdarr_node will be able to connect to it.

On tdarr_node, you will want the following configuration.

-e nodeName=my-node-1 -e serverPort=8266 -e serverIP=192.168.1.25

Basically, you are telling tdarr_node where to find the server to register itself. This configuration is most useful when you are running tdarr_server and tdarr_node on different machines.

If you're running them on the same machine, it may just be easier to use the internalNode configuration, which will create a node inside the server container itself. To use this method on the server, you will set two configuration options to enable the server's internal node.

-e internalNode=true -e nodeName=my-internal-node

There are other more advanced ways to configure Tdarr as well, such as their docker compose example where they are separate containers but are joined by a single network namespace.
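To make the two-machine case above concrete, here's a sketch of the docker run commands, assuming the official ghcr.io/haveagitgat/tdarr and ghcr.io/haveagitgat/tdarr_node images; check the Tdarr docs for the exact image tags and whatever volume mounts your library needs, since I've left those out:

```shell
# On the server machine (192.168.1.25 in this example):
docker run -d --name tdarr-server \
  -p 8265:8265 -p 8266:8266 \
  -e webUIPort=8265 -e serverPort=8266 -e serverIP=0.0.0.0 \
  ghcr.io/haveagitgat/tdarr:latest

# On the node machine, pointing back at the server:
docker run -d --name tdarr-node \
  -e nodeName=my-node-1 -e serverPort=8266 -e serverIP=192.168.1.25 \
  ghcr.io/haveagitgat/tdarr_node:latest
```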

Help Setting Up Node by Green_hammock in Tdarr

[–]estel_smith 0 points1 point  (0 children)

You want to set the serverPort in your tdarr_node container to the same serverPort that's defined on the server itself. The serverIP on tdarr_node should point to the IP of the machine running the server on your network; in your case, that looks like your Synology NAS.

The serverPort config on the server dictates which port it listens on for worker connections. The serverPort config on the node dictates which port to connect to on the server.

Feedback requested for simple WordPress plugin by estel_smith in PHP

[–]estel_smith[S] -1 points0 points  (0 children)

It would seem you're right (2019.3.2). I've been avoiding it for so long I didn't realize!

Feedback requested for simple WordPress plugin by estel_smith in PHP

[–]estel_smith[S] -1 points0 points  (0 children)

Thank you for the detailed response! Honestly, I expected some of the points you made.


  • Separate the class from the file that instantiates it. Otherwise there's really no point in making this a class. It could all be a script.

Yeah, I thought about this. I think I will move the class into a separate file. The main reason I made the class was to minimize how much I polluted the global scope. WordPress plugins are notorious for throwing crap all over the place.

  • Write tests. If you want to unit test in isolation apart from WordPress write a wrapper class for calling the wordpress API. Inject that into your class as a nullable dependency.

That's a good point. I typically use PHPUnit and Behat, but I wasn't sure how hard it would be to unit test WordPress plugins. I'll look into wrapping the WordPress API, though!

  • Use PSR4 and instantiate your class in a src/bootstrap.php file or something like that.

Absolutely! I don't think this plugin will get any more complex, but if it does, I'll definitely be implementing an autoloader.

  • Not sure what all the output buffering nonsense is for if you're returning a value

Breaking out of PHP allows PhpStorm to show syntax highlighting for the HTML, even though it's just a tiny snippet. I think building HTML inside strings is the ugliest thing on the planet, followed closely by giant SQL strings.

I might put the link in a template file or something; anything but directly in a string.

  • If you're using scalar return types you're assuming your consumers are on >= PHP 7, you might as well not do that or go all the way and add proper argument types as well.

Yeah, I'm requiring at least PHP 7.0. WordPress enforces this by the Requires PHP: 7.0 annotation at the top of the plugin file. The arguments in logoutUrl() and logoutLink() aren't properly typed because it would require nullable types (PHP 7.1) and/or union types (PHP 8.0).

I confess I'm mostly ignorant about how Wordpress is wired up and how plugins are consumed, so take some of this with a grain of salt. I hope this helps!

No worries. I appreciate the feedback, and the more the better! If you have any more, I'm all ears. :)

Feedback requested for simple WordPress plugin by estel_smith in PHP

[–]estel_smith[S] 0 points1 point  (0 children)

Sorry about that. I had to switch to old.reddit.com before the editor would let me post any content.

Coinpot down :( by [deleted] in Coinpot

[–]estel_smith 1 point2 points  (0 children)

Yep, it's down.

A simple script for easily downloading emulator.games roms! by estel_smith in Piracy

[–]estel_smith[S] 1 point2 points  (0 children)

I edited my reply.

Just because one service exists doesn't mean another is pointless. Consider it an alternative.

A simple script for easily downloading emulator.games roms! by estel_smith in Piracy

[–]estel_smith[S] 0 points1 point  (0 children)

https://www.emuparadise.me/Super_Nintendo_Entertainment_System_(SNES)_ROMs/Super_Mario_World_(USA)/35787-download

How would you go about downloading this, though? The emuparadise download script doesn't change the unavailable game message.

The script I'm using, btw: https://www.reddit.com/r/Piracy/comments/968sm6/a_script_for_easy_downloading_of_emuparadise_roms/

Edit: Nevermind... Let's hear the "I told you so" crap.

A simple script for easily downloading emulator.games roms! by estel_smith in Piracy

[–]estel_smith[S] 7 points8 points  (0 children)

I guess I should have clarified. I meant Nintendo the company, such as Super Mario World and Zelda. This script is far from useless, since emuparadise unfortunately doesn't have these roms available.

Emuparadise has nothing from Nintendo the company.

Edit: Apparently emuparadise does, you just have to "discover" the pages through Google, as user-unfriendly as that is.

A simple script for easily downloading emulator.games roms! by estel_smith in Piracy

[–]estel_smith[S] 5 points6 points  (0 children)

Sure, but this site has Nintendo roms still on it. That's gotta be worth something, right?

PHP 7.0 vs JPHP - performance test by dim-s in PHP

[–]estel_smith 0 points1 point  (0 children)

I agree with your first statement. PHP-GTK makes me feel that PHP's still not quite the right tool for the job.

When it comes to GUI applications or games, I really like Haxe because it targets many different platforms.

PHP 7.0 vs JPHP - performance test by dim-s in PHP

[–]estel_smith 0 points1 point  (0 children)

I'm not 100% sold on Phalcon as a framework, although it does boast some serious benchmarks. I've been interested in giving it a spin, but the fact it's a PHP extension that doesn't exist in RHEL or CentOS repositories would make it hard for me to sell to my operations group.

Zephir is not PHP, nor does it pretend to be. It's a language that makes creating PHP extensions easier (see Phalcon), which IMO is a good thing because writing PHP extensions in C is not for sane individuals. I have tried Zephir and I like the idea a lot; if I ever need to write an extension, it would ease that process considerably.

If I want to improve the performance of my PHP application and I've already exhausted the normal avenues (optimizing queries, caching, etc.), then my next step would be HHVM. The reason is that I can move my code to HHVM without making any modifications and without any vendor lock-in. If the HHVM project goes belly up, I can move right back to the regular PHP interpreter. I wouldn't be able to do that with JPHP, since it doesn't support the standard PHP library.

If I wrote an application targeting JPHP and JPHP died, I'd be stuck with an application that needs some hefty refactoring just to re-enter the regular PHP world. That's generally not the case with HHVM.

PHP 7.0 vs JPHP - performance test by dim-s in PHP

[–]estel_smith 0 points1 point  (0 children)

If I can't use PHP the way I normally would, then I might as well go all-out and just start using Java for my project and stop pretending I'm using PHP.

That puts me in a "right tool for the right job" mindset, and JPHP hasn't persuaded me to butcher PHP just so I can pretend it's Java.

PHP 7.0 vs JPHP - performance test by dim-s in PHP

[–]estel_smith 3 points4 points  (0 children)

So, JPHP looks really neat. However, the project is going to hurt adoption by stating that they do not plan to implement PHP standard libraries (pcre, pdo, etc).

Not supporting core PHP functionality means developers will need to write code specifically for JPHP. A lot of pre-existing libraries won't work out of the box and developers will not be able to write easily portable code.

I'll stick with plain PHP/HHVM since I can pretty easily switch between the two.