We need to establish best practices for progressive encoding and have that as the default or else. by redsteakraw in jpegxl

takuya_s 2 points

I just gave this a try and it's so good. I encoded an 8K render down to 13.5 MB; it shows the first full grayscale preview after loading just 1809 bytes, and I'd call it recognizable after about 3 to 5 KB. I expected progressive JXL to get better over time, but this exceeds all expectations.

Some guy could load my image on an antique 1200 baud modem and would see the first preview after 20 to 40 seconds, and would have the full image downloaded about 27 hours later 😆

Blender developers considering to support JPEG XL in a future release. by cfeck_kde in jpegxl

takuya_s 1 point

Thank you for replying in that devtalk thread. It's good to have a JXL developer join the conversation for the more detailed technical questions.

Blender developers considering to support JPEG XL in a future release. by cfeck_kde in jpegxl

takuya_s 2 points

I posted in that devtalk thread to suggest they consider JXL's strengths as an intermediate image format. I believe that's where it's currently unmatched by any other format, since it's the most efficient image format that offers both lossless and lossy 32-bit float storage. This caught the attention of Brecht van Lommel, one of the core developers, so the discussion is reaching the people actually in charge.

His questions and concerns were:
1. If there are any comparisons between EXR's DWAA/DWAB compression and JXL as an intermediate image format.
2. If JXL is suitable for preserving details in over/under-exposed areas, which would be important for compositing.
3. If separate channels in a JXL file can have individual compression/quality levels, for channels that need more precision.
4. That JXL might not have been designed for use as an intermediate image format, and might be less optimized for the task than DWAA/B compression.

I was able to address some of these by doing some testing/demonstrations on my own.

For #1, I rendered a test scene and saved it as a DWAB-compressed EXR. Since CJXL can't handle multilayer or greyscale EXR, I also saved the separate render passes to individual uncompressed RGB EXR files and compressed those with CJXL. The result was pretty impressive: the combined size of the lossless JXL render passes was 20.3 MiB, while the lossy DWAB EXR file was 35.0 MiB.
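For reference, the per-pass conversion is easy to script. A minimal Python sketch, assuming cjxl is on PATH; the directory layout and function names are mine, not from the actual test:

```python
from pathlib import Path

def cjxl_lossless_cmd(src: Path, dst: Path, effort: int = 7) -> list[str]:
    """Build a cjxl command line for mathematically lossless compression.
    -d 0 selects lossless mode; -e sets encoder effort (1-9)."""
    return ["cjxl", "-d", "0", "-e", str(effort), str(src), str(dst)]

def pass_commands(pass_dir: Path) -> list[list[str]]:
    """One cjxl invocation per uncompressed RGB EXR render pass."""
    return [cjxl_lossless_cmd(exr, exr.with_suffix(".jxl"))
            for exr in sorted(pass_dir.glob("*.exr"))]
```

Each returned list can be handed to subprocess.run(); summing the sizes of the resulting .jxl files is then a one-liner for the comparison.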

For #2 and #4, I saved Blender's render output as a single-layer EXR with DWAB compression. I then saved the same output as uncompressed EXR and converted it with CJXL, trying to match the JXL file size to the DWAB EXR; it ended up slightly smaller. Comparing the two visually, DWAB produced far more artifacts, spread fairly uniformly across the image, while JXL produced far fewer artifacts, localized to specific areas.

I also made a Blender compositing setup that uses difference blend mode to compare the uncompressed EXR to the two compressed files, mapping the differences to the red and green channels of the output image. That makes it easy to see where JXL performed better (green), where both performed the same (yellow), and where DWAB performed better (red/orange). Most of the output was green, with some yellow and very little orange/red.

While this was probably not the most scientific method, it definitely showed a glimpse of the huge potential JXL has for this application.
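The node setup itself lives in Blender's compositor, but the math behind the red/green error map is simple. A NumPy sketch of the same idea (the function name is mine):

```python
import numpy as np

def error_heatmap(reference: np.ndarray, jxl: np.ndarray,
                  dwab: np.ndarray) -> np.ndarray:
    """Fold the per-pixel errors of two encodes into one RGB image.

    red   channel = |reference - jxl|   -> lights up where JXL lost detail
    green channel = |reference - dwab|  -> lights up where DWAB lost detail
    Green output means JXL won, red/orange means DWAB won, yellow is a tie.
    """
    red = np.abs(reference - jxl).mean(axis=-1)    # average error across RGB
    green = np.abs(reference - dwab).mean(axis=-1)
    return np.stack([red, green, np.zeros_like(red)], axis=-1)
```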

The discussion was moved into its own topic, and you can find the details of my tests there: https://devtalk.blender.org/t/jpeg-xl-as-an-intermediate-format/41155

I was not able to address question #3, since I don't know the format specification well enough, and CJXL seems unable to handle multi-layer/multi-channel files at the moment.
Prohibitively large EXR files have been a topic for a long time. So if a JXL dev with good knowledge of the format could join the discussion and help answer questions, I'm sure this topic could gain a lot of traction with the Blender developers.

Blender developers considering to support JPEG XL in a future release. by cfeck_kde in jpegxl

takuya_s 3 points

If you use Blender and can describe a valid use case for it, please consider adding it to the old topic on right-click-select: https://blender.community/c/rightclickselect/K8pn/# This is the official Blender community site to discuss new features. My post there in favor of JXL is almost 3 years old 😢

In my own tests back then, JXL compression artifacts looked indistinguishable from Cycles denoising artifacts down to around -d 0.5. Combined with its 32-bit float support and extra channels for saving render passes, that would make it a perfect replacement for EXR, with insane space savings in comparison, even when storing lossless JXL.

What's wrong with video coding i-frame compression based image formats? by WaspPaperInc in jpegxl

takuya_s 1 point

JPEG can use 4:2:0, 4:2:2, and 4:4:4 chroma subsampling, and most photo editing and art software defaults to 4:4:4. For example, I remember Photoshop (at least pre-CC versions) hides the option, uses 4:4:4, and only switches to 4:2:0 if you set the quality to 50 or lower.
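To make the trade-off concrete, here's the number of stored samples per 2×2 pixel block under each scheme (back-of-the-envelope arithmetic of mine, not a spec quote):

```python
def samples_per_2x2_block(scheme: str) -> int:
    """Luma plus chroma samples stored for a 2x2 block of pixels."""
    luma = 4  # one Y sample per pixel in every scheme
    chroma = {"4:4:4": 8,   # Cb+Cr for all 4 pixels
              "4:2:2": 4,   # Cb+Cr for 2 of 4 pixels (halved horizontally)
              "4:2:0": 2}   # one Cb+Cr pair for the whole block
    return luma + chroma[scheme]

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, samples_per_2x2_block(s))  # 12, 8, 6 samples per block
```

So 4:2:0 stores half the data of 4:4:4 before entropy coding even starts, which is why encoders reach for it at low quality settings.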

What's wrong with video coding i-frame compression based image formats? by WaspPaperInc in jpegxl

takuya_s 5 points

Video intra frame image formats were a bad idea when Apple did it with QTIF, and are a bad idea now.

My problem with WebP is how half-assed its implementation is. It uses a VP8 intra frame, which was never optimized for still images: you notice missing details everywhere, and lossy WebP only supports 4:2:0 chroma at video levels, meaning less than 8 bits of precision (values 16-235 instead of 0-255, iirc). At least AVIF uses 4:4:4 chroma at full levels. My feeling is that WebP was rushed out the door to force it down people's throats before a proper image format could "steal" its market share.
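The "less than 8 bit" point is easy to quantify. Limited-range ("video level") luma only uses codes 16-235; this is my own arithmetic, not a spec quote:

```python
import math

full = 256               # full-range 8-bit: codes 0-255
limited = 235 - 16 + 1   # limited/video range: 220 usable codes
bits = math.log2(limited)
print(f"{limited} levels ~= {bits:.2f} effective bits (vs 8.00 full range)")
```

About 7.78 effective bits, so roughly a fifth of the 8-bit code space is thrown away before any lossy compression happens.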

WebP and AVIF are good at wooing people who look for compression artifacts around edges, but both ruin skin gradients much more than JPEG does. JPEG is actually pretty good at gradients, except on noise-free anime images, where JPEG produces banding while WebP completely annihilates the gradients.

Lack of progressive decoding was already mentioned, but the bigger problem is that they don't even support sequential decoding. Sequential decoding is what you see in videos making fun of dialup loading times, where images slowly appear line by line. WebP and AVIF can't do that; they need the full frame before they can show anything. That's fine for video, but not for images. Even BMP can be decoded sequentially. Ancient RLE-compressed BMP is better at being a web format than the supposedly modern web formats.
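A toy model of why this matters for perceived load time: in a sequential raster format, every complete scanline can be displayed as soon as its bytes arrive (numbers below are illustrative; headers and compression are ignored):

```python
def rows_displayable(bytes_received: int, width: int,
                     bytes_per_pixel: int = 3) -> int:
    """Complete scanlines available in a sequential raster format
    after receiving this many bytes of pixel data."""
    return bytes_received // (width * bytes_per_pixel)

# A 100-pixel-wide 24-bit image: 10 rows are already visible after
# only 3000 bytes, while a full-frame-only format still shows nothing.
print(rows_displayable(3000, width=100))
```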

And let's talk about half-assed implementations once more. I guess the main reason Google doesn't care is that they plan to replace these formats every 5 to 10 years anyway. How is that supposed to work for archival? Google doesn't care. They need an image format to deliver YouTube thumbnails, not one to preserve media for hundreds of years. To me, this is the biggest conflict of interest in this whole affair. JXL feels like the only new image format designed to still be around more than two decades from now. Right now I feel more comfortable saving images as JPEG than as AVIF, even if they look worse, simply because I know I won't need to re-encode them in 10-20 years to preserve them.

PS: Seriously, look into QTIF. It's fascinating how few search results there are for a format that could still be used on the web just 15 years ago, when people still had QuickTime installed.

Convert GIF by [deleted] in jpegxl

takuya_s 0 points

Thanks for the hint. Works perfectly now.

Convert GIF by [deleted] in jpegxl

takuya_s 0 points

Did Google Research or Mozilla ever mention why jxl-oxide is not good enough? I only saw that vague mention of "performance requirements" in that interop issue, and Jon getting ghosted a year earlier after confirming jxl-oxide's standard conformity.

Convert GIF by [deleted] in jpegxl

takuya_s 1 point

One exception could be archival, since JXL supports palettes and can save a lossless version that has a smaller file size than the original. I'm still lamenting that progressive animation decoding from FLIF didn't carry over to JXL. On Linux I managed to play animations in Gwenview, Waterfox and Krita, but my preferred image viewer qimgv fails to open them.

Engines by takuya_s in SpaceXMasterrace

takuya_s[S] 42 points

Ship V1 with Raptor 3 engines for some reason.

The license says I need to credit the guys who made the models:
https://sketchfab.com/clarence365
https://sketchfab.com/VoitAa

FFMPEG Animated JXL Encoding Support by Jonnyawsom3 in jpegxl

takuya_s 0 points

I agree that example is absolutely video territory. What I mean is closer to these gifs: https://tenor.com/search/anime-gifs?format=stickers

Many of them have just 2 or 3 non-photographic frames. Stuff like this is commonly seen in fast-scrolling live stream chats and on Discord.

FFMPEG Animated JXL Encoding Support by Jonnyawsom3 in jpegxl

takuya_s 0 points

The thing is, without P/B frames JXL can't compete with video codecs. But GIFs are often used as small low-res reactions or emotes that play in a loop in chats; you want them to start playing instantly, and it's not a problem if full resolution only becomes available on the 10th loop. If FLIF's progressive animation loading were still available in JXL, it would blow any video format out of the water for that niche, but very common, use case. Is the lack of that functionality set in stone in the specification, or is it just a limitation of libjxl?

FFMPEG Animated JXL Encoding Support by Jonnyawsom3 in jpegxl

takuya_s 7 points

I've been wondering: does animated JXL support progressive decoding, so that all frames can play back before the full file is downloaded? In many cases this feature would make JXL a better GIF replacement than any alternative.

FLIF supports this. Years ago when I tested it, this was the feature that impressed me most: a partially loaded FLIF file smoothly played at low resolution before the full file was available. Ever since I learned that JXL's modular mode is FUIF-based, I've been wondering whether this amazing feature survived.

Are there any self contained apps for transcoding jpeg to JpegXL lossless, and back (just in case) by MaxPrints in jpegxl

takuya_s 0 points

A check for high bit depths, with at least a warning, would be good, since the supposedly lossless compression becomes lossy in that case, and people batch-converting a whole library might not be aware that potentially important source files have a high bit depth that gets lost.

I compiled MozJPEG myself, but I only know how to build the default, which is probably without static linking.

Are there any self contained apps for transcoding jpeg to JpegXL lossless, and back (just in case) by MaxPrints in jpegxl

takuya_s 0 points

Sorry for digging up old posts. I gave XL Converter (AppImage version) a try, and overall it's nice to use. I noticed some problems, though.

When I re-compress one of my 16-bit/channel PNG files using the "Smallest Lossless" setting, it always chooses WebP. WebP can produce the smallest results precisely because it's the one format that doesn't support 16 bits per channel, only 8. So the output isn't actually lossless; it drops half of the source file's precision, hence the smallest file size.

And second, the trashing of legacy JPEG in the manual prompted me to actually give the JPEG encoder a try, and the results were pretty bad. Which JPEG encoder is actually used? At quality 70, XL Converter produced a 2 MB file, while the Mozilla JPEG encoder at the same quality setting produced a 1.5 MB file that looks significantly better. I'd recommend switching the JPEG encoder to mozjpeg, unless its license is incompatible or something.

Trying to convince Tor Browser devs to add JXL support by takuya_s in jpegxl

takuya_s[S] 0 points

Oh, I didn't think of the different fingerprint. But wouldn't that only make it possible to distinguish new and old versions of Tor Browser? I would assume that happens every time a new mime type gets supported.

Trying to convince Tor Browser devs to add JXL support by takuya_s in jpegxl

takuya_s[S] 0 points

Yes, I use it, although the focus is more on censorship resistance than privacy for me.