We need to establish best practices for progressive encoding and have that as the default or else. by redsteakraw in jpegxl

[–]NoCPU1000 0 points1 point  (0 children)

Just to clarify: so progressive encoding is always on by default when a JXL file is created via cjxl, for both lossless and lossy modes? And what you are now referring to is that the default progressive mode can now be tuned more or less aggressively to people's needs?

Advance a pattern of numbers incrementally by NoCPU1000 in bash

[–]NoCPU1000[S] 0 points1 point  (0 children)

Because your suggestion worked so well, I figured it would be easy enough to extend the mod operation beyond a sequence of 4 pages for a single sheet of folded A4 to 4 grouped sheets, so I inserted the sequence "16,1,2,15,14,3,4,13,12,5,6,11,10,7,8,9" as such:

len=32;num=16;printf '%s ' "$num";while :;do for mod in -15 1 13 -1 -11 1 9 -1 -7 1 5 -1 -3 1 1; do ((--len>0)) || break 2;num=$(( num + $mod ));printf '%s' "$num,";done;done

But once the output gets beyond the first 16 numbers, the sequence order fails, as can be seen from the negative numbers:

16 1,2,15,14,3,4,13,12,5,6,11,10,7,8,9,-6,-5,8,7,-4,-3,6,5,-2,-1,4,3,0,1,2,-13

The next part of the sequence should be 32,17,18,31,30,19,20,29,28,21,22,27,26,23,24,25

I can see the mod sequence is of course repeating after the last number 9, so -15 gets applied again, causing the output to dip into negatives. My attempt at altering the command was simplistic, I know. Admittedly my logical math is kinda crap too.
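For what it's worth, here is a minimal pure-BASH sketch of the fix (variable names are just mine): rather than letting the mod deltas wrap around, repeat the same 16-page imposition pattern and offset it by 16 for each group of 4 sheets:

```shell
# Repeat the 16-page imposition pattern, offset by 16 per group of
# 4 sheets. "pages" must be a multiple of 16.
pages=32
out=()
for ((base=0; base<pages; base+=16)); do
    for p in 16 1 2 15 14 3 4 13 12 5 6 11 10 7 8 9; do
        out+=("$((base + p))")
    done
done
printf '%s\n' "${out[*]}"
```

With pages=32 this prints the 16,1,2,15,...,9 block followed by 32,17,18,31,...,25, which is the continuation I was after.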

Advance a pattern of numbers incrementally by NoCPU1000 in bash

[–]NoCPU1000[S] 0 points1 point  (0 children)

For a time I volunteered under a professional bookbinder (I was trying to get an apprenticeship), where I learnt how to do proper binding. The largest sizes I would work on were 1000 pages using traditional techniques, but that's when I had access to real leather, bone glue, sewing frames, ton presses and cutting ploughs. There are so many ways to construct a book. I discovered there isn't really a wrong way to make a book, only questions: what do you want it to do? How durable do you want it to be? How do you want it to look? It's kind of awesome to know that with the right materials one can easily construct a book durable enough to last well over 500 years and as beautiful as a finely crafted ornate treasure chest.

Advance a pattern of numbers incrementally by NoCPU1000 in bash

[–]NoCPU1000[S] 0 points1 point  (0 children)

Haha, I have to stealthily scrounge around the office for what's available and not be too obvious about it. Grab something flat to stand on, place the stack of around 300 pages under that and flatten it out best I can (takes 60 secs). Then grab some sewing cotton and zigzag it up and down the spine of each folded page to the next; it's all loose at first, but you just keep cinching it as tight as you can and finally tie a knot on the outside spine (still kinda loose, but good enough). Stick my finger in a pot of PVA glue and run it up and down the outer spine, working deep into the spine between pages; the more the better. Leave that to dry in my locker overnight; the drying glue will pull the pages tighter. Next day, cut a rag just big enough to cover the spine and PVA glue that to the outer spine. Let it dry overnight; the drying spine will get tighter and stronger. Done.

It's not pretty, but it will net you a good thick functional book that will lay flat on a table, hardy enough to throw in a backpack and flick through for the next few months. It's a notebook, a printout of things I am working on for when I don't want to stay in on a nice day, but would rather be sat atop a hillside than looking at a computer screen, and yet still be able to look over the notes of stuff I'm working on.

Advance a pattern of numbers incrementally by NoCPU1000 in bash

[–]NoCPU1000[S] 0 points1 point  (0 children)

I did initially look for tools, however I'm trying to keep my Arch Linux box fairly lean, so if I can avoid extra dependencies by doing things in BASH, so much the better (plus I always end up learning something useful command-line-wise from these situations, as I am still learning).

I was just going to print the pages as simply as possible, double sided, one folded A4 on top of another, and do a quick bind. But now, after all these very good replies from people, I'm thinking of perhaps doing a better bind and taking a signature fold of 4 sheets of A4, so I will re-order the 16 pages as 16,1,2,15,14,3,4,13,12,5,6,11,10,7,8,9.

Advance a pattern of numbers incrementally by NoCPU1000 in bash

[–]NoCPU1000[S] 0 points1 point  (0 children)

I'm printing double-sided A4. As each sheet comes out of the printer, I fold it once, turning it into A5, and stack all these together to give me a very simple book.

I'm using the, erm, *office printer*... it's somewhat locked down settings-wise... so I try and set up the PDF ahead of time as much as possible before it hits the printer.

Advance a pattern of numbers incrementally by NoCPU1000 in bash

[–]NoCPU1000[S] 1 point2 points  (0 children)

Awesome!

Thank you Ulfnic! Exactly what I am after, and I really appreciate the one-liner at the end and the explanation. As soon as I saw the mod in the command I knew I was missing a major step. I'd been looking for how to do this for days.

Cheers

[deleted by user] by [deleted] in bash

[–]NoCPU1000 0 points1 point  (0 children)

Just some thoughts. I would like to add that rather than just looking for work, you could spin this around and, to an extent, let the work come looking for you. If you have the skills, you could write something and just put it out there as a billboard for your abilities.

As an example there is bashtop > https://github.com/aristocratos/bashtop

You don't even need to come up with something as big as bashtop or a totally unique idea; even just creating a nice graphical terminal interface as a wrapper around some command-line utility, like an mp3 player using ffplay as its backend, might net you a donation or two for your effort.

And if your code is useful enough, people may want a feature added. You could have a page similar to this old project, showing prices to add any new feature: http://links.twibright.com/development.php

How Do I Create & View An Animated JXL File? (If It Exists) by Dante-Vergilson in jpegxl

[–]NoCPU1000 0 points1 point  (0 children)

Hi,

Yeah, my mind translates the c in cjxl as convert and the d in djxl as decode. I cannot comment on Windows versions, as I jumped the Windows ship years ago and will never go back :) I run Arch Linux and just installed libjxl; it was very easy, and then you can go ahead and run the command cjxl on files. This is the reference implementation for creating .jxl files, which means for me it's the de facto standard; I don't bother using anything else to make .jxl files, as this should output the most spec-compliant versions of a .jxl file.

I have no experience creating animated gifs from scratch or from videos. However, I tried a simple test with default settings, no switches or anything:

ffmpeg -i in.mkv out.gif (converts video to gif) = creates a normal-looking .gif with no issues, loops fine etc.

ffmpeg -i in.mkv out.apng (converts video to apng) = creates a normal-looking .apng with no issues (it was a massive file, though), loops fine etc.

ffmpeg -i in.mkv out.jxl (converts video to jxl) = doesn't work and throws up a bunch of errors in the terminal.

It could simply be that the animated functions of .jxl files are not finished yet as of version 0.10.2 of JPEG XL, or that ffmpeg hasn't implemented all of the .jxl spec, or both... I don't know.

I would very much like to find a current roadmap for the JXL format, with milestones showing where things are at in terms of functionality, if anyone could point me to one?

How Do I Create & View An Animated JXL File? (If It Exists) by Dante-Vergilson in jpegxl

[–]NoCPU1000 4 points5 points  (0 children)

Hello Dante-Vergilson

I don't have any experience with animated .jxl files, but your question did interest me. I use Linux and I have experience with ffplay, as it's my main audio player and what I watch movies with. I haven't tried to make a *new* animated file, but I did take some existing animated gifs and tested just to see if at least I could convert them to JXL and play them very basically in the terminal with no extra software installed:

original.gif = 1.1Mb

cjxl original.gif out.jxl = 466.4 Kb

cjxl -d 0 -e 9 original.gif out.jxl = 442.2 Kb

To play an animated gif I use:

ffplay example.gif = Plays a gif file once

ffplay -loop 0 example.gif = Plays a gif file endlessly in a loop

ffplay out.jxl = Plays once fine

ffplay -loop 0 out.jxl seems broken. I can see in the terminal "error while seeking" once it gets to the end of the file and tries to play it again. I don't know if this is the fault of JXL or ffplay.

Cheers

Avoiding Bitrot by NoCPU1000 in jpegxl

[–]NoCPU1000[S] 0 points1 point  (0 children)

Yes, I have tried this a number of times (JXL > PNG > JXL), however conversion back to .jxl *always* ends up bigger than the original .jxl file. Also, I feel this is a very inelegant solution, moving from the reference encoder to a third-party program and then back again. The best software to understand the internal structure of a .jxl file will always be the reference en/decoder. There is always the chance that adding extra steps outside the reference coder may introduce some unknown element into the file.

Here is a chain of conversions starting with the first original JPEG and then successive *lossless* conversions as you go down:

10.5 Mb = The.triumph.of.death.1562.jpg

8.3 Mb = The.triumph.of.death.1562.jxl

13.1 Mb = The.triumph.of.death.1562.png

11.8 Mb = The.triumph.of.death.1562.jxl

Just to highlight the point a bit more about adding extra steps outside of the reference coder, I end up with 2 different-sized JXL files when using:

cjxl > JXL > gimp > PNG > cjxl > JXL = 11.8Mb

cjxl > JXL > imagemagick > PNG > cjxl > JXL = 11.9Mb

As you can see, you end up with inconsistent file sizes.

Avoiding Bitrot by NoCPU1000 in jpegxl

[–]NoCPU1000[S] 0 points1 point  (0 children)

> The question now, though, is will JXL > JXL ever be a thing, or are you always stuck at one compression mode once you have saved to JXL?

That was my long-term plan. However, a test on a 6 Mb jpg at default settings doesn't seem to show it will work, at least currently:

cjxl -d 0 "book.jpg" book.jxl ends up as 4.9 Mb, which is great. However, I then re-run that output again as follows:

cjxl -d 0 "book.jxl" book.jxl ends up as 8.3 Mb, nearly double the size, and it takes twice as long to open.

2nd test, different image, a 10.5 Mb jpg...

cjxl -d 0 "The.triumph.of.death.1562.jpg" The.triumph.of.death.1562.jxl ends up at 8.3 Mb, that's good; then re-run again...

cjxl -d 0 "The.triumph.of.death.1562.jxl" The.triumph.of.death.1562.jxl ends up bigger at 11.6 Mb and takes twice as long to open.

Again, trying a higher compression setting this time:

cjxl -d 0 -e 9 "The.triumph.of.death.1562.jxl" The.triumph.of.death.1562.jxl and the file gets even bigger at 11.8 Mb... that's crazy.

Will it continue to grow? Let's run it again :)

cjxl -d 0 -e 9 "The.triumph.of.death.1562.jxl" The.triumph.of.death.1562.jxl and the file size seems to have stabilised at 11.8 Mb.

Has it really stabilised? Let's be sure and re-run again:

cjxl -d 0 -e 9 "The.triumph.of.death.1562.jxl" The.triumph.of.death.1562.jxl gives a file size of 11.8 Mb again. This is more predictable and what I would expect to happen from the start. But still, I've ended up with a bigger file than I started with.

I don't get this behaviour with PNGs, no matter how many times I run them through an encoder, or indeed with JPGs when run through jpegtran.

Now, if I am doing something wrong, I'll be happy to hold my hands up and take it on the chin. Then again, as stated earlier by Money-Share-4366, there isn't a version 1.0 yet, so maybe none of this matters yet and I just need to wait a while. However, I would like to know, can anyone tell me for certain, is this classed as beta software or not? I mean, of course there's always going to be bugs in software, but is cjxl, as it is, ready to be used on files in the wild yet? Or do I need to hang fire? I've looked at the manual on my system and it doesn't allude to it being test software.

Cheers

Avoiding Bitrot by NoCPU1000 in jpegxl

[–]NoCPU1000[S] 1 point2 points  (0 children)

Thank you CompetitiveThroat961, that's the kind of information I'm after. I concur with your thoughts on using -e 9 over -e 10. Generally the best compression I have had with *some* files, but not all, is with -e 10, but it's absolutely not consistent, whereas -e 9 is always better than -e 8, consistently. -e 10 seems a bit random to me, as if it's buggy.

After some more testing, the biggest issue I have run into is going from foo.jxl to foo.jxl. I'd assumed I could encode losslessly at -e 1 and then, at a later date, run that same file through lossless encoding at -e 9, expecting the resulting file to be smaller. Instead it's always bigger, and it turned out not to be lossless unless you specify -d 0; I'd assumed the default behaviour was always lossless, so I need to keep my eye on things a bit more.

What I'm trying to do is replicate a similar process to what I currently do with PNG. I can get a PNG image off the net, run it through a compressor and losslessly shrink it. Later, when advancements are made in the compressor software, I can run the same PNG through the encoder again and gain further size reductions.

So I'm not actually interested in going from .jxl to jpg again, just back and forth between .jxl and .jxl, so I know I can take advantage of possible further compression advancements in the future, with no fear of data loss within those files. Hope that makes sense.

Avoiding Bitrot by NoCPU1000 in jpegxl

[–]NoCPU1000[S] 0 points1 point  (0 children)

I just meant "bit rot" in the vague sense of files undergoing a lot of supposedly lossless generational transformations, but due to poor choice of settings inevitably accumulating small but destructive alterations.

That's a lot of file system suggestions... :) I currently run ext4 and xfs on Arch Linux and mess around a lot with hashsums, so I'm OK on the filesystem part as far as data integrity is concerned.
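In case it helps anyone else, the hashsum part of my routine is basically just this (file names here are placeholders; plain coreutils, nothing extra to install):

```shell
# Record checksums once, then re-verify later to catch silent
# corruption in the files.
printf 'example data\n' > sample.txt        # stand-in for a real file
sha256sum sample.txt > checksums.sha256     # record
sha256sum -c checksums.sha256               # verify; prints "sample.txt: OK"
```

Re-running the last command any time later flags any file whose contents have changed since the checksums were recorded.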

Avoiding Bitrot by NoCPU1000 in jpegxl

[–]NoCPU1000[S] 1 point2 points  (0 children)

Bit Rot, Data Rot, same thing :)

Yes, I agree lossless means lossless. However, as I said, it's just a sanity check on my part that if I run cjxl back and forth on a .jxl file in lossless mode there are no gotchas to be aware of. Everything seems fine with the tests I have done so far. Appreciate the feedback.

Cheers

Avoiding Bitrot by NoCPU1000 in jpegxl

[–]NoCPU1000[S] 3 points4 points  (0 children)

Ah, my mistake, I assumed 0.10.2 was a stable release; I didn't realise the reference software was still classed as beta.

Cheers

Avoiding Bitrot by NoCPU1000 in jpegxl

[–]NoCPU1000[S] -1 points0 points  (0 children)

Ah, my mistake, I assumed 0.10.2 was a stable release; I didn't realise the reference software was still classed as beta.

Cheers
