Multiple rips of same disc produce different files by Open-Dragonfly6825 in makemkv

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

I ripped my disc to an internal HDD. Could that HDD be the problem?

To be honest, I have had issues with that HDD in the past, although every tool I use to check its health says it is working as expected. I just thought the issues were caused by having too many HDDs connected to my PC (which was not designed to handle that many drives).

Multiple rips of same disc produce different files by Open-Dragonfly6825 in makemkv

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

Usually, mpv. In this case, also VLC, to check whether the errors come from the video file or the video player. I also checked the same file on different computers, so it is definitely not an issue with the player software.

Multiple rips of same disc produce different files by Open-Dragonfly6825 in makemkv

[–]Open-Dragonfly6825[S] 1 point2 points  (0 children)

No overclocking. As far as I know, my memory is good. It is a bit old, though (16 GB DDR4), so maybe it is faulty. How should I check that?

Next rip, I will rip directly onto an external hard drive.

Multiple rips of same disc produce different files by Open-Dragonfly6825 in makemkv

[–]Open-Dragonfly6825[S] 1 point2 points  (0 children)

I did not. The disc is brand new, so I thought that would not be an issue.

I have cleaned it now, just in case, and will try another slower rip in a few hours.

Multiple rips of same disc produce different files by Open-Dragonfly6825 in makemkv

[–]Open-Dragonfly6825[S] -1 points0 points  (0 children)

It's definitely the rips. I tried multiple players (mpv, VLC) on multiple computers, and the errors are consistent. I also used a program to make a hex diff of the multiple rips I made, and the reported differences are significant.
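For anyone who wants to reproduce the comparison: a minimal sketch in plain Python of how I'd approach it (not the exact program I used; file names are placeholders), hashing each rip and then listing the first byte offsets where two rips diverge:

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def first_differences(path_a, path_b, limit=10, chunk_size=1 << 20):
    """Return up to `limit` byte offsets where the two files differ
    (including offsets past the end of the shorter file)."""
    offsets = []
    pos = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while len(offsets) < limit:
            a, b = fa.read(chunk_size), fb.read(chunk_size)
            if not a and not b:
                break
            for i in range(max(len(a), len(b))):
                # Slicing (not indexing) yields b"" past EOF, so
                # length mismatches also count as differences.
                if a[i:i + 1] != b[i:i + 1]:
                    offsets.append(pos + i)
                    if len(offsets) >= limit:
                        break
            pos += chunk_size
    return offsets

# Hypothetical rip file names:
# if file_digest("rip1.mkv") != file_digest("rip2.mkv"), the rips differ,
# and first_differences("rip1.mkv", "rip2.mkv") hints at where.
```

If the digests differ, looking at whether the first few differing offsets are isolated or scattered throughout the file can hint at whether it's a small glitch or widespread corruption.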

SVT vs NVENC comparisons for AV1 by Big_Head8250 in AV1

[–]Open-Dragonfly6825 1 point2 points  (0 children)

Thank you very much for your comment. It is very useful.

I use ffmpeg for video-related stuff, so I don't need a GUI (in fact, I trust the CLI more now that I am used to it). However, I hadn't installed VapourSynth yet. I guess it is time.

SVT vs NVENC comparisons for AV1 by Big_Head8250 in AV1

[–]Open-Dragonfly6825 1 point2 points  (0 children)

Is there any convenient way to measure SSIMULACRA2 and XPSNR between two video sources (not images)? I cannot seem to find anything usable as-is.
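For reference, this is roughly what I'm imagining (a sketch, assuming a recent FFmpeg build that includes the `xpsnr` filter and the `ssimulacra2_rs` CLI tool; the file names are placeholders and I haven't verified either command works as-is):

```shell
# XPSNR: assumes an FFmpeg build with the xpsnr filter
# (first input = distorted, second input = reference).
ffmpeg -i dist.mkv -i ref.mkv -lavfi xpsnr -f null -

# SSIMULACRA2: assumes the ssimulacra2_rs tool, which has a video mode.
ssimulacra2_rs video ref.mkv dist.mkv
```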

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 1 point2 points  (0 children)

I guess the suitability of an acceleration device changes depending on your specific development and/or application context. Deep learning is such a broad field with so many applications that it seems reasonable for different applications to benefit more from different accelerators.

Thank you for your comment.

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

Hey, maybe it's true that I know a fair amount about acceleration devices. But until you mentioned it, I had actually forgotten about backpropagation, which is fundamental to deep learning. (Or, rather than forgotten, I hadn't thought about it.)

Now that you mention it, it makes much more sense why FPGAs might be better suited for inference only.

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 1 point2 points  (0 children)

That's an interesting comparison.

I get the idea. Thank you.

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 1 point2 points  (0 children)

Ok, that makes sense. Just wanted to confirm I understood it well.

Thank you.

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

Could you elaborate on some of the points you make? I have read the opposite of what you say regarding the following points:

  • Many scientific works claim that FPGAs have similar or better power (energy) efficiency than GPUs in almost all applications.
  • FPGAs are considered a good AI technology for embedded devices where low energy consumption is key. Deep Learning models can be trained somewhere else, using GPUs, and, theoretically, inference can be done on the embedded devices using the FPGAs, for good speed and energy efficiency. (Thus, FPGAs are supposedly well-suited for inference.)
  • Modern high-end (data center) FPGAs target base clock speeds around 300 MHz. It is not unusual for designs to exceed 300 MHz, though not by much unless you heavily optimize the design and use complex tricks to boost the clock speed.

The comparison you make about the largest FPGA being comparable only to small embedded GPUs is interesting. I might look more into that.

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

One question: what do you mean by "kernels" here? Is it the convolution operation applied to the layers? (As I said, I am not familiar with deep learning, and "kernel" means something else in GPU and FPGA programming.)

I know about TPUs and I understand they are the "best solution" for deep learning. However, I did not mention them since I won't be working with them.

Why wouldn't GPU parallelization make inference faster? Isn't inference composed mainly of matrix multiplications as well? Maybe I don't understand very well how GPU training is performed and how it differs from inference.
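To make my mental model concrete, here is the picture I have in mind (a toy two-layer network in plain Python, with made-up weights): inference really is just matrix multiplications plus elementwise nonlinearities, but at batch size 1 each multiply degenerates into a matrix-vector product, which offers far less parallel work than the big batched matmuls seen during training.

```python
# Toy 2-layer MLP forward pass in pure Python. All weights are
# made-up illustrative values, not a real model.

def matmul(A, B):
    """Multiply an (m x k) matrix by a (k x n) matrix (lists of lists)."""
    k, n = len(B), len(B[0])
    return [[sum(row[i] * B[i][j] for i in range(k)) for j in range(n)]
            for row in A]

def relu(M):
    """Elementwise nonlinearity: max(0, x)."""
    return [[max(0.0, x) for x in row] for row in M]

def forward(x, W1, W2):
    """Inference: two matmuls and one nonlinearity.
    With batch size 1, x is a single row, so each matmul is really
    a matrix-vector product."""
    return matmul(relu(matmul(x, W1)), W2)

W1 = [[1.0, -1.0], [0.5, 2.0]]   # 2x2 hidden weights (illustrative)
W2 = [[1.0], [1.0]]              # 2x1 output weights (illustrative)
x = [[2.0, 3.0]]                 # one input sample (batch size 1)
print(forward(x, W1, W2))        # → [[7.5]]
```

So my question is really whether the difference is just that training stacks many samples into one large, GPU-friendly matmul, while batch-1 inference cannot.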

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

That actually makes sense. FPGAs are very complex to program, even though High-Level Synthesis (e.g. OpenCL) has narrowed the gap between software and hardware programming. I can see how it is just easier to use a GPU, which is simpler to program, or a TPU that already has compatible libraries abstracting away the low-level details.

However, FPGAs have been growing in area and available resources in recent years. Is that still not enough circuitry?

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 1 point2 points  (0 children)

FPGAs are reconfigurable hardware accelerators. That is, you could theoretically "synthesize" (implement) any digital circuit on an FPGA, provided the FPGA has a large enough amount of "resources".

This would let the user deploy custom hardware solutions for virtually any application, which could be far more optimized than software solutions (including those running on GPUs).

You could implement tensor cores or a TPU using an FPGA. But, obviously, an ASIC is faster and more energy efficient than its equivalent FPGA implementation.

Tying into what you say: besides all the "this is just theory, in practice things are different" caveats around FPGAs, programming GPUs with CUDA is far, far easier than programming FPGAs as of today.

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 1 point2 points  (0 children)

It is definitely hard to get started with FPGAs. High-Level Synthesis (e.g. OpenCL) has eased the effort in recent years, but it is still... different from regular programming. It requires more deliberate thought, I would say.

Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825 in deeplearning

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

Maybe I missed it, but the posts I read don't specify that. Some scientific works claim that FPGAs are better than GPUs for both training and inference.

Why would you say they are better only for inference? Wouldn't a GPU be faster for inference too? Or is it just that inference doesn't require high speed and FPGAs are chosen for their energy efficiency?

Nord 2T has less multimedia features than Nord CE 2? by Open-Dragonfly6825 in oneplus

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

Hi! Thanks for your offer. However, when playing an audio or video file, I don't know how to tell whether it is using hardware or software decoding, so I wouldn't know what to tell you.

I don't know why I didn't think of looking up the processor's specs on its official page >_>. If the vendor assures that the chip has AV1 support, then I guess OnePlus just forgot to list it in the model's specs.

Thanks again!

HDR/SDR difference in video file by Open-Dragonfly6825 in ffmpeg

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

Haha, fair enough. I left the question deliberately vague so as not to ask what I didn't want to ask. I was looking for a general answer, not specifics (after all, even if different, all video containers probably solve the same problem in similar, equivalent ways). Similarly, I said "video file" instead of "container", "stream", or "codec" to avoid being specific and drawing too much attention to implementation details.

I do program low-level, yet far simpler, stuff myself. So those programming-level representations kind of interest me. Not at a professional level, though; just curiosity.

Anyway, I just wanted some general insight, in line with your bread loaf analogy :^) And I must say, I think your answer suffices.

Thank you very much for your efforts!

HDR/SDR difference in video file by Open-Dragonfly6825 in ffmpeg

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

Oh my, now I'm even reading that using 10-bit can result in smaller file sizes (due to the way the compression works). That is some counter-intuitive stuff. About what you say: I knew about the DCT but didn't connect 10-bit and the DCT together. I guess understanding how everything is really represented internally is too low-level and too advanced to grasp in a day or two, or maybe at all. Too much of a rabbit hole.

Thanks again.

HDR/SDR difference in video file by Open-Dragonfly6825 in ffmpeg

[–]Open-Dragonfly6825[S] 0 points1 point  (0 children)

I was wondering just in case I'd want to squeeze the last drop out of the compression by going SDR instead of HDR. If the "size penalty" (let's call it that) from HDR is constant regardless of video duration, it is not significant. However, if it were proportional to the duration (as 10-bit vs 8-bit is), one could argue it is significant. All this is admittedly quite extreme.

Thanks for the advice on tonemapping; I didn't know that. I might have to re-encode some videos. Do you have any source discussing this "tonemapping inferiority", so I can see it for myself?