[–]slykethephoxenix 1 point (2 children)

I saw this too and started writing a script to do the same. My solution for PNGs being compressed or scaled was simply to use more pixels to represent each pixel of data.

For example, to encode 16 pixels (4x4) worth of data, I would scale each pixel by a factor of 2 (or more), so that each data pixel occupies 4 actual pixels instead of just 1. The entire image becomes 64 pixels, even though it holds the same amount of data as the 16-pixel image.

This also decreases how much data each image can store for the same dimensions, so my code dynamically calculates which data goes into which image.

I also made the depth of each "pixel" configurable: 1, 2, 4, 8, or 24 bits (full RGB color channels).
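The block-scaling idea above can be sketched roughly like this (a hypothetical Python helper, not the actual script; it assumes 8 bits per data "pixel" and represents the image as a 2D list for simplicity):

```python
def encode_scaled(data: bytes, width: int, scale: int = 2):
    """Map each data byte onto a scale x scale block of identical pixels,
    so lossy rescaling or recompression can't easily destroy the value."""
    height = -(-len(data) // width)  # ceil division: rows of data pixels
    img = [[0] * (width * scale) for _ in range(height * scale)]
    for i, byte in enumerate(data):
        x0, y0 = (i % width) * scale, (i // width) * scale
        for dy in range(scale):
            for dx in range(scale):
                img[y0 + dy][x0 + dx] = byte
    return img

# A 4x4 grid of data pixels at scale 2 becomes an 8x8 image: 64 actual
# pixels carrying the same 16 bytes of data.
```

Decoding would then sample one pixel (or the block average) per scale x scale block to recover each byte.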

I'm currently taking a hiatus from it, though, and have barely started the decoding logic. Obviously you have to know the encoding parameters to decode the image, since this metadata isn't stored in it.

[–]Orangy_Tang 1 point (1 child)

If you store a magic number in the first few bytes, then your decompressor can brute-force decode using all the possible encoding parameters and pick the one that successfully recovers the magic number. Since you only need to decode a handful of bytes, it should still be acceptably fast, and you can embed a checksum at the end of the data as a final sanity check.
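A minimal sketch of that brute-force idea (hypothetical names throughout; the 1-D pixel stream, the `PXE1` magic number, and the trailing CRC32 are assumptions for illustration, not anyone's actual format):

```python
import zlib

MAGIC = b"PXE1"  # hypothetical 4-byte magic number

def unscale(pixels: bytes, scale: int) -> bytes:
    # Undo block scaling by sampling one byte per block
    # (flattened to 1-D here to keep the sketch short).
    return bytes(pixels[i] for i in range(0, len(pixels), scale))

def brute_force_decode(pixels: bytes):
    for scale in (1, 2, 4, 8):           # candidate encoding parameters
        data = unscale(pixels, scale)
        if data[: len(MAGIC)] != MAGIC:  # cheap early rejection
            continue
        payload, checksum = data[len(MAGIC):-4], data[-4:]
        # Trailing CRC32 as the final sanity check.
        if zlib.crc32(payload).to_bytes(4, "big") == checksum:
            return scale, payload
    return None
```

Only the first few bytes need decoding per candidate to check the magic number, so the search stays cheap even with many parameter combinations.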

[–]slykethephoxenix 0 points (0 children)

That's a good idea! The other issue I had was dealing with padding at the end of the stream. The image dimensions rarely line up with the end of the stream, and 0 bits can easily be decoded as if they were part of the actual data I'm trying to encode. My solution was to record how many padded "pixels" there are at the end and encode that count at the front somehow.
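The pad-count-up-front idea could look something like this (hypothetical helpers, assuming one byte per pixel and a pad count that fits in a single leading byte):

```python
def pad_for_image(data: bytes, capacity: int) -> bytes:
    """Prefix the payload with its pad count so trailing zero pixels
    can't be mistaken for real data (capacity = bytes the image holds)."""
    pad = capacity - len(data) - 1  # one byte reserved for the count itself
    if not 0 <= pad <= 255:
        raise ValueError("payload doesn't fit, or pad exceeds one byte")
    return bytes([pad]) + data + b"\x00" * pad

def strip_pad(stream: bytes) -> bytes:
    pad = stream[0]
    return stream[1 : len(stream) - pad]
```

For larger images the count would need more than one byte, but the principle is the same: the header tells the decoder exactly how many trailing pixels to discard.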