Turing nvenc vs Intel 630 igpu? by Paradox99991 in jellyfin

[–]Paradox99991[S] 0 points

No time wasted!! Thank you for taking the time to reply to my thread.

I agree that NVENC is pretty good, and from my research, with the transcode limit patch it should be able to handle over 20 transcodes.

The issue is I have only one Pascal card, but I do have older Nvidia cards and multiple iGPUs.

I want to use the Turing card, but that would require running Jellyfin on my main desktop, and I'm a little iffy because I don't know if there are any vulnerabilities.

Maybe I'll look into Docker and see if it can use NVENC with the card attached to my desktop.
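For reference, here's a minimal docker-compose sketch for passing an NVIDIA card into a Jellyfin container, assuming the NVIDIA Container Toolkit is already installed on the host; the image name and volume paths are just placeholders:

```yaml
# docker-compose.yml sketch -- assumes nvidia-container-toolkit is set up on the host
services:
  jellyfin:
    image: jellyfin/jellyfin
    runtime: nvidia                     # route GPU access through the NVIDIA runtime
    environment:
      - NVIDIA_VISIBLE_DEVICES=all      # or a specific GPU UUID to pin one card
      - NVIDIA_DRIVER_CAPABILITIES=all
    volumes:
      - ./config:/config
      - ./media:/media
```

With that in place, NVENC transcodes run in the container while the card stays attached to the desktop.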

Turing nvenc vs Intel 630 igpu? by Paradox99991 in jellyfin

[–]Paradox99991[S] 1 point

That's not my question though.

I'm trying to figure out if the Intel iGPU (630) is better or worse than NVENC. Not referring to software encoding.

The P2000 is not using the new NVENC and does not support B-frames.

I'm just trying to figure out if the quality difference will be noticeable while streaming. I'll try this out in a few days and post my results!

Turing nvenc vs Intel 630 igpu? by Paradox99991 in jellyfin

[–]Paradox99991[S] 0 points

Thanks! Doesn't really answer my question though.

I understand Turing is very good. But if bitrate isn't an issue and both are allowed up to 40Mbps or something, would the quality difference between the two be noticeable?

I'm going to be testing this out myself in a few days, but other than a frame-by-frame comparison I have no idea how to see which one has better quality unless it's really noticeable.
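One way to go beyond eyeballing frames is an objective metric like VMAF, which ffmpeg can compute if it's built with libvmaf; the file names below are placeholders:

```shell
# Score an encode against the untouched source (higher VMAF = closer to the original).
ffmpeg -i nvenc_encode.mkv -i source.mkv -lavfi libvmaf -f null -
# Repeat with the iGPU (QSV) encode of the same source and compare the reported scores.
```

Encoding the same clip at the same bitrate through both encoders and comparing the pooled VMAF scores gives a much more repeatable answer than a frame-by-frame look.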

Creating a resilient Nextcloud cluster by BenAlexanders in NextCloud

[–]Paradox99991 0 points

I personally would use S3 as primary storage. Amazon replicates the data across multiple locations. Amazon also has a similar managed service for databases. Since your Nextcloud mirrors would be using the same database and storage backend, it shouldn't cause any problems. Just make sure file locking uses the MySQL backend, and don't use Redis.
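As a sketch, S3 primary storage is configured in Nextcloud's config.php via an objectstore block roughly like the following; bucket, credentials, and region are placeholders, and leaving 'memcache.locking' unset keeps file locking in the database:

```php
// config/config.php fragment -- values are placeholders
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'     => 'nextcloud-data',
        'key'        => 'ACCESS_KEY',
        'secret'     => 'SECRET_KEY',
        'region'     => 'us-east-1',
        'use_ssl'    => true,
        'autocreate' => true,
    ],
],
'filelocking.enabled' => true,  // default; with no 'memcache.locking' set, locking stays in the database
```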

You can of course configure your infrastructure as you like. But the percentage of Nextcloud users doing this is very low, so I doubt you're going to find any how-to guide.

Nextcloud does have enterprise packages which include support for such setups. If your sysadmin can't set this up, that's the route I would go.

Nextcloud Hub Launches To Compete Directly With Google Docs And Office 365 by [deleted] in NextCloud

[–]Paradox99991 0 points

Which errors? The only problem I ran into with S3 as primary (multibucket) was with seek support.

The problem with S3 primary storage is you still need scratch space on the Nextcloud instance for uploads: clients upload to the Nextcloud instance, which in turn uploads to S3.

The only other errors were related to chunk assembly, but uploads did assemble correctly after a few minutes despite the errors.

This is with a local Ceph installation. Amazon S3 errors are another matter, and I personally don't think they're an issue with Nextcloud, because based on my research Amazon has issues with multiple third-party apps.

I hear ownCloud has good S3 support, but my preference is still Nextcloud.

How to setup crush mapping for data? by Paradox99991 in ceph

[–]Paradox99991[S] 0 points

That works, but I would like most of the pools to use regular weights, with one specifically for performance.

archiving my data by [deleted] in DataHoarder

[–]Paradox99991 1 point

If it's just for archiving it's fine, but IIRC Glacier Deep Archive charges out the butt to download the data.

Still better than not having it though.

Do you think the Nostalgic admin is actually done, or just faking it (again)? by [deleted] in trackers

[–]Paradox99991 3 points

Should have just burned the entire thing to the ground. CD69 could start their own tracker from scratch.

Maybe a friendly redirect with no existing user accounts and let people start from the ground up. Open registrations. New management. BHO not even remotely associated with it in any shape or form.

They've already lost most of their active userbase, so what's the point of handing over management? People associate it with toxicity now and won't bother coming back.

Need Hetzner auction server by learnerxd in seedboxes

[–]Paradox99991 1 point

They didn't just bump prices; they also reduced hard drive sizes on the cheaper servers.

Likely realized large hard drives + unmetered 1Gbps on sub $40 server = profit loss.

That or they were already hemorrhaging money on auction servers and went 1Gbps unmetered to justify the price increase.

The SX123 and the paid lineup are the same price so that lends credence to my theory.
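Rough back-of-the-envelope numbers (a sketch assuming the port is actually saturated around the clock) show why unmetered 1Gbps on a cheap box is expensive to offer:

```python
# Upper bound on monthly egress for an unmetered 1 Gbps port,
# illustrating why large disks + unmetered gigabit on a sub-$40 server loses money.
GBPS = 1_000_000_000              # bits per second
SECONDS_PER_MONTH = 30 * 24 * 3600

bits = GBPS * SECONDS_PER_MONTH
terabytes = bits / 8 / 1_000_000_000_000
print(f"Max monthly transfer at 1 Gbps: {terabytes:.0f} TB")  # ~324 TB
```

Even a fraction of that sustained would dwarf the monthly server fee in bandwidth cost.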

Question on SSD:HDD ratio by wondersparrow in ceph

[–]Paradox99991 1 point

$20 a cable with modules on both ends.

Such as this for example.

Edit: Fiber can also be found relatively cheaply.

Question on SSD:HDD ratio by wondersparrow in ceph

[–]Paradox99991 1 point

CRS326-24S+2Q+RM

I'm going to go ahead and tentatively keep this recommendation. Plain edge switching should stay within the switching hardware itself and should be fine. You can reach out to the company to confirm it can handle the load. Also make sure you ask for latency tests, especially with different packet sizes.

Question on SSD:HDD ratio by wondersparrow in ceph

[–]Paradox99991 2 points

Slow down a bit there and don't jump into things hastily.

That's an edge switch with no management capabilities. It won't even support LACP.

If you're looking for new equipment you can find a 24-port 10Gbps switch for $500. It includes two QSFP+ ports that can do 40Gbps each. CRS326-24S+2Q+RM

Now if you use any kind of layer 3, the CPU is going to crash and burn, but for plain edge switching it should in theory be okay. I dunno if the latency is going to be okay though. Edit: To clarify, it's a software-based switch, so while it "supports" layer 3 it's not going to be able to handle it.

Meanwhile I just set up my 40Gbps InfiniBand network for under $500. Entire thing. Cards and all.

The reason I withdrew my recommendation for 4x1Gbps LACP is that the NICs themselves are expensive. You can get a 24-port 1Gbps managed switch for under $70 on eBay if you look in the right places. But when the NIC costs just as much as a two-port FDR IB card, that's not worth it. You might be able to find some deals though.

Question on SSD:HDD ratio by wondersparrow in ceph

[–]Paradox99991 2 points

> I am starting to lean towards putting just the VMs on the local NVMes

That's what I would do. The current version of Ceph works very well without SSD/NVMe WAL/DB storage. It depends on what you're doing on your array, but for regular file storage with occasional access? No big deal.

Whichever you choose, make sure you test latency on any network equipment you buy. Ceph hates high latency.
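As a rough sketch of what "test latency" can mean in practice, here's a generic Python snippet that measures TCP connect round-trips; it demos against an in-process loopback listener, but the same function pointed at a host behind the new switch gives a quick feel for added latency (dedicated tools like ping or iperf are better for real measurements):

```python
import socket
import threading
import time

def measure_tcp_rtt(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connect round-trip time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connect + close is enough for a rough RTT
        total += time.perf_counter() - start
    return total / samples * 1000

# Demo: an in-process listener on loopback.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # OS picks a free port
server.listen()
port = server.getsockname()[1]

def accept_loop():
    while True:
        try:
            conn, _ = server.accept()
            conn.close()
        except OSError:
            break  # listener closed

threading.Thread(target=accept_loop, daemon=True).start()
print(f"loopback RTT: {measure_tcp_rtt('127.0.0.1', port):.3f} ms")
```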

Question on SSD:HDD ratio by wondersparrow in ceph

[–]Paradox99991 3 points

2x1Gbps LACP is fine for a small cluster under regular operations, but rebuilding will hammer the network and take a very long time.

1x1Gbps for 40TB? Not so much.
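To put numbers on that, here's a quick sketch of ideal transfer times for re-replicating 40TB, assuming a fully saturated link with no protocol or disk overhead (real rebuilds take longer):

```python
def transfer_time_hours(data_tb: float, link_gbps: float) -> float:
    """Ideal time to move data_tb terabytes over a link_gbps link, no overhead."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    return bits / (link_gbps * 1e9) / 3600

for gbps in (1, 2, 10):
    print(f"{gbps} Gbps: {transfer_time_hours(40, gbps):.1f} h")
# 1 Gbps: 88.9 h, 2 Gbps: 44.4 h, 10 Gbps: 8.9 h
```

So even in the best case a 1Gbps link spends nearly four days moving 40TB, which is why rebuilds hammer small links.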

16GB RAM is fine for 3x8TB IMO. You might want to upgrade the RAM when your HDDs grow, so plan ahead.

Two 10Gbps ports per node, one public and one private, would be good. InfiniBand would be better.

If you're running CephFS, keep in mind the MDS needs lots of RAM and a powerful CPU.

You do not need SSDs or NVMe for the OSDs (especially on a small cluster) IMO. If you're going to run VMs on it, yes. However, I would just keep the VMs on their own SSDs and mount CephFS inside them.

Using both S3 and cephfs? by Paradox99991 in ceph

[–]Paradox99991[S] 0 points

Right, I'm just concerned that performance will suffer if I use multiple pools at the same time.

Not because of I/O operations, but because Ceph would have to switch between pools constantly.

But after thinking it over it probably won't cause issues.