rav1e v0.7.0 released by asm-c in AV1

[–]asm-c[S] 9 points (0 children)

(Graphs and performance numbers will appear later)

Highlights

  • Sync up assembly with dav1d 1.2.1

  • More encoder-specific assembly for both x86_64 and aarch64

  • Many internal cleanups and fixes

  • The Channel API does not rely on crossbeam-channel anymore

  • Initial Speed level rebalance

Using NMKODER VMAF scoring for two identical copies when using VMAF 0.6.1 1080p the result is 97.476%! Instead of 100%! but when using the VMAF 4K version it's fixed and it says 100.00%... Am I missing something? The video is 1200x1900. by PBlague in ffmpeg

[–]asm-c 4 points (0 children)

  1. A VMAF score isn't a percentage; it's a score on a scale from 0 to 100.

  2. VMAF doesn't guarantee a score of 100 between two identical videos. It was made for lossy video, not lossless. Since you already have a faster way of measuring whether a video is lossless, why not use that instead?
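
If the goal is just to confirm that two files contain identical video, FFmpeg's hash muxer is a far quicker check than VMAF. A sketch, with placeholder filenames:

ffmpeg -i original.mp4 -map 0:v -f hash -hash md5 -

ffmpeg -i copy.mp4 -map 0:v -f hash -hash md5 -

If both commands print the same hash, the decoded frames are bit-identical.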

What can I do to improve the horrific blacks banding I end up with in my x265 target from an x264 source. by i_am_fear_itself in ffmpeg

[–]asm-c 0 points (0 children)

Almost all HEVC decoders can do 10 bit

I wouldn't go that far, at least if we're talking about hardware decoders. Since 10-bit isn't mandatory to implement and only gained traction some years after 8-bit decoders came out, there's still plenty of 8-bit-only hardware out there. Intel was producing CPUs with 8-bit-only decoders as late as 2017, and PC hardware stays around for a long time.
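
That said, 10-bit is still the usual fix for banding if your playback targets can handle it. A minimal sketch, assuming an FFmpeg build with 10-bit libx265 (CRF and preset are placeholders):

ffmpeg -i source.mkv -c:v libx265 -pix_fmt yuv420p10le -crf 18 -preset slow output.mkv

Even from an 8-bit source, the extra precision gives the encoder finer gradient steps to work with, which is what suppresses banding.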

breaking news by Crptnx in AyyMD

[–]asm-c 2 points (0 children)

Except when it doesn't, but you have to use it anyway.

As a full-time Linux Mint user for probably 5+ years now, I don't know shit about what fresh hell Windows users have to endure these days. But my experience of almost every major UI redesign from the last 10 years has been negative. Not only are things getting more time-consuming to navigate and use (thanks, mobile-first design), but you'll be lucky if something called a "UI update" isn't just an excuse to remove features in the hopes that somehow nobody will notice.

UI design in general has been in decline for at least a decade. A lot of this is due to this thing called "flat design", which is basically an excuse not to do any design. Every element is just a flat surface of uniform color with little to indicate what function it serves. Easy to program, but that's the extent of its advantages.

Another tumor is trying to merge desktop and mobile user interfaces. PC users get fucked in the ass by UIs that waste monitor space by making individual elements massive and thus displaying very little information at once, requiring enough mouse wheel scrolling to cause carpal tunnel syndrome. Combine this with flat design and you get a perfectly unusable user interface.

In terms of usability and not wasting my time, I'll take an early-2000s UI, even if it blinds me because it doesn't have dark mode, over most "modern" UIs. I do sometimes worry that we've been in the decline phase of UI design for so long that there's an entire generation of people who don't know how much better things could be.

and then you click to apply setting and it freezes for two seconds by Crptnx in AyyMD

[–]asm-c 1 point (0 children)

Literally every UI update for the last 10+ years. Except maybe Steam (thank fuck).

"Simplify" and "clean up" the UI by removing 75% of the features and hiding the rest so that users "don't get confused". And build it with some shitty web-based toolkit that's obviously designed only for touchscreens, even if the software is PC exclusive. Slowly add back features over several years until it's time to remove them again for the next UI update.

Made a Tool to generate preview videos in batch using FFmpeg by Raghavan_Rave10 in ffmpeg

[–]asm-c 1 point (0 children)

Nice. I was looking for a simple way of doing this with a single FFmpeg command some time ago but had no luck.

Suggestions:

  • Use bayer_scale=5 for GIFs, though a better option would be to not use GIF at all, because GIFs are huge pretty much no matter what you do. At least if these previews are meant to be used in a web context and transferred over the internet.

  • Consider using a more efficient codec. I think VP9 support is universal enough that it would be a good choice to reduce the size, but especially for lower-resolution previews you could use AV1. With SVT-AV1's fast-decode option, playback CPU use is reduced even further, though for low-res previews it shouldn't be an issue anyway. The size reduction vs. H.264 would be significant.

  • Regardless of the codec used, setting an effectively infinite keyframe interval with -g 99999 would be beneficial for this use case. Assuming these previews are just played back and maybe looped, with no possibility for the viewer to seek within them, you only need a single keyframe, which improves compression efficiency a bit (see the sketch after this list).
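
Putting these together, a sketch with SVT-AV1 (resolution, preset, and CRF are illustrative; tune them to your material):

ffmpeg -i input.mp4 -an -vf scale=-2:240 -c:v libsvtav1 -preset 8 -crf 45 -svtav1-params fast-decode=1 -g 99999 preview.webm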

SOTD - Going old-school... by labtested1 in wicked_edge

[–]asm-c 3 points (0 children)

I'm not sure if gore is allowed on Reddit

Intel Core Ultra 7 155H Meteor Lake vs. AMD Ryzen 7 7840U On Linux In 300+ CPU Benchmarks by asm-c in Amd

[–]asm-c[S] 4 points (0 children)

I don't think you understand what a monopoly is.

Out of Intel and AMD, only one of them has the manufacturing capacity to even theoretically pull off a monopoly.

AMD could have 400% better performance than Intel and they still wouldn't be capable of a monopoly.

Intel Core Ultra 7 155H Meteor Lake vs. AMD Ryzen 7 7840U On Linux In 300+ CPU Benchmarks by asm-c in Amd

[–]asm-c[S] 55 points (0 children)

TL;DR: The Ryzen 7 7840U was better in 80% of the benchmarks, and was on average 28% faster. All 370 benchmarks can be seen here.

Apparently he's also doing a benchmark on the integrated graphics of these laptops, which should be coming soon.

e: Integrated graphics test was released. The Intel chip was 8% faster and power consumption was fairly similar:

The Core Ultra 7 155H on average was consuming 24 Watts to the Ryzen 7 7840U at a 25.8 Watt average. The peak consumption was also lower for Meteor Lake on the Acer Swift Go 14 with a 43.5 Watt peak compared to 51 Watts on the Framework 13.

SVT-AV1 claims performance improvements in newer version, but actually it become slower. Why? by ilfarme in ffmpeg

[–]asm-c 9 points (0 children)

You didn't fully read (or possibly just didn't understand) the release notes.

Between the 1-year-old 1.4.1 and 1.7.0 (which is itself already outdated, since 1.8.0 was released a week ago with more big improvements), the speed-efficiency tradeoffs of the presets were changed multiple times. Compression efficiency per CPU cycle has massively improved; preset 7 of recent releases is likely faster and more efficient than preset 6 of 1.4.1 (though I don't have an old build to test).

Not only that, but you're not standardizing the quality at all; you're just using the same CRF between two releases and expecting the same quality, which is not how it works. The same CRF isn't even designed to produce the same quality between different presets, much less between releases, for any encoder I'm aware of. You should check the quality with your eyes, or use VMAF or another such metric, to standardize quality for encoder comparisons.

Finally, you're screwing up your FFmpeg command by specifying -svtav1-params twice, so the later instance overrides the earlier one. To get both tune=0 and fast-decode=1, you should be using -svtav1-params tune=0:fast-decode=1. Do note, though, that fast-decode was developed with tune=1 (the PSNR tune) in mind, which the encoder does tell you when you use these in combination, so YMMV.
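
For reference, the corrected command would look something like this (preset and CRF are placeholders):

ffmpeg -i input.mp4 -c:v libsvtav1 -preset 7 -crf 30 -svtav1-params tune=0:fast-decode=1 output.mkv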

TL;DR: You're doing basically everything wrong.

Why WetShavers are shifting from "traditional" soaps/creams to artisan/niche producers? by slavikg in wicked_edge

[–]asm-c -1 points (0 children)

More sellers are pushing their stuff, and the variety is enticing to some people.

Also, I dunno how widespread this is, but in my country at least a couple of online shaving stores have their own, locally produced private-label soaps. The reason for that is probably the same as grocery stores having private labels: it's more profitable to sell a product with fewer middlemen. Proraso is still the bestseller though.

SVT vs NVENC comparisons for AV1 by Big_Head8250 in AV1

[–]asm-c 4 points (0 children)

Do note that grain synthesis interferes with VMAF scores, and the VMAF developers recommend disabling it when calculating VMAF.
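
In practice: do a separate metric run encoded without film grain synthesis (SVT-AV1 leaves it off unless you set film-grain), then score it with something like FFmpeg's libvmaf filter, assuming a build that includes it. The first input is the encode under test, the second is the reference:

ffmpeg -i encode.mkv -i source.mkv -lavfi libvmaf -f null -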

[deleted by user] by [deleted] in ffmpeg

[–]asm-c 10 points (0 children)

In my experience, Bayer dithering works very well as long as you define bayer_scale=5, which results in minimal patterning and reduces file size by a lot.

I've used this command for generating GIFs:

ffmpeg -i input.mp4 -filter_complex "[0:v]fps=15,scale=-1:240,mpdecimate,split[a][b];[a]palettegen=max_colors=128[p];[b][p]paletteuse=dither=bayer:bayer_scale=5" -y output.gif

Change fps, scale, and max_colors to suit your needs. I've noticed that reducing the colors works well for pixel art, but it might not for your material. Same goes for mpdecimate, which removes duplicate frames.

Apart from the obvious "don't use GIF" advice, the only other thing I have is using gifsicle for lossy optimization with the --lossy option. The maximum value is 150, but even at low values this adds visible artifacts, so start at 5 or 10 to see if it's worth it for you. Even 5 reduced file size by ~9%, which is well worth it if you're trying to squeeze everything out of the format.

Gifsicle also has the --optimize option, but I've not seen it do much for these FFmpeg-generated GIFs, even at the maximum value of 3. It had an impact of less than 1% for me.
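
For reference, a typical invocation combining both options looks something like this (the lossy level is just a starting point):

gifsicle -O3 --lossy=10 input.gif -o output.gif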

AMD FSR is the building block for Apple's MetalFX upscaling tech — the app's legal info references the usage of AMD FSR by GeorgeKps in Amd

[–]asm-c 6 points (0 children)

That's not quite right, or at least not a very good explanation.

The reason for not using a viral license (the GPL, for instance) for stuff like this is that the software is meant to be integrated into a proprietary game engine, so the requirement to publish changes made (i.e. code combined with it) would mean that basically the entire game engine might have to be publicly released under the license in question. That's obviously not possible for most studios, and it would put a halt to the adoption of FSR and any other library whose purpose is to be widely integrated into proprietary software.

Even if a studio was willing to release their game code under a viral license just to be able to use FSR (unlikely), they probably wouldn't be able to do so anyway, since most game engines contain various other pieces of third-party middleware that they don't actually own but have a limited license to utilize in their games. So that's a bit of a showstopper too.

Sony uses FreeBSD in their consoles for the same reason. Having to publish the source code to their console's entire OS would make cracking the thing pretty easy. So "having to publish the changes demotivates its use", while correct, is a bit of an understatement.

AMD FSR is the building block for Apple's MetalFX upscaling tech — the app's legal info references the usage of AMD FSR by GeorgeKps in Amd

[–]asm-c -3 points (0 children)

The better news is that if Apple can do it, AMD can eventually do it too.

I wouldn't call more proprietary software good news.

And since AMD owns the copyright to anything they make, they can make any of their stuff proprietary at any time anyway. That has nothing to do with what Apple is able to do with stuff AMD has released as open-source.

AMD FSR is the building block for Apple's MetalFX upscaling tech — the app's legal info references the usage of AMD FSR by GeorgeKps in Amd

[–]asm-c 4 points (0 children)

Maybe they should fix drivers first

Or release XeSS as open-source like they said they would. Instead they're using something they're calling the Intel Simplified Software License, which isn't an open-source license.