packJPG complains about non-optimal Huffman table, and decompresses to a smaller image #103
dreamlayers started this conversation in General
For a very few images compressed by jpegoptim, packJPG complains about a non-optimal Huffman table, with "reconstruction of inefficient coding not supported". If I get past that warning, compress with packJPG -p and then decompress with packJPG, the resulting image is a bit smaller. Compressing that image with jpegoptim --all-progressive --force reproduces the original, slightly larger file, so the whole round trip is completely lossless. The size difference is tiny; the main issue is that this coding inefficiency also causes jpegoptim to produce an image which another tool refuses to accept. This jpegoptim 1.4.4 is built with mozjpeg 3.1, so I guess it's probably a mozjpeg issue?

Here is an example image. Its SHA1 should be 7fb331d744eb7ce23175184f000f331b637257a6.
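For reference, here is a rough sketch of that round trip. It is only an illustration: the binary names "jpegoptim" and "packjpg", the input.jpg file name, and packJPG's foo.jpg <-> foo.pjg output naming are my assumptions, not something taken from the report.

```python
# roundtrip.py -- sketch of the round trip described above, not a tested recipe.
# Assumptions (mine, not from the report): the binaries are invoked as "jpegoptim"
# and "packjpg", the test file is input.jpg, and packJPG maps foo.jpg <-> foo.pjg.
import hashlib
import os
import shutil
import subprocess

def sha1(path: str) -> str:
    """SHA1 hex digest of a file, for byte-exact comparison."""
    with open(path, "rb") as fh:
        return hashlib.sha1(fh.read()).hexdigest()

shutil.copy("input.jpg", "step.jpg")

# 1. Produce the jpegoptim output that packJPG later complains about.
subprocess.run(["jpegoptim", "--all-progressive", "--force", "step.jpg"], check=True)
original_digest = sha1("step.jpg")
original_size = os.path.getsize("step.jpg")

# 2. packJPG round trip; -p is the flag mentioned above for getting past the
#    "reconstruction of inefficient coding not supported" warning.
subprocess.run(["packjpg", "-p", "step.jpg"], check=True)   # step.jpg -> step.pjg
os.remove("step.jpg")                                       # let packJPG recreate it
subprocess.run(["packjpg", "step.pjg"], check=True)         # step.pjg -> step.jpg
print("size change after round trip:", os.path.getsize("step.jpg") - original_size)

# 3. Re-optimizing the slightly smaller reconstruction should reproduce the
#    original file byte for byte, confirming the operation is lossless.
subprocess.run(["jpegoptim", "--all-progressive", "--force", "step.jpg"], check=True)
print("byte-identical after re-optimizing:", sha1("step.jpg") == original_digest)
```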
Replies: 1 comment

-

jpegoptim doesn't itself optimize the Huffman tables; it leaves that to the "libjpeg" library. It would appear to be an issue with mozjpeg...
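Not something from this thread, but if anyone wants to compare the Huffman tables the two libraries actually emit, a small sketch like the following can walk the JPEG marker stream and print each DHT segment's code-length counts. It assumes a well-formed marker stream and is only for eyeballing the tables, not validating them.

```python
# dht_dump.py -- print the Huffman (DHT) tables found in a JPEG file.
# Sketch only; usage: python dht_dump.py file.jpg
import sys

def iter_segments(data: bytes):
    """Yield (marker, payload) for each marker segment, crawling past scan data."""
    i = 2  # skip the SOI marker (FF D8)
    while i < len(data) - 1:
        if data[i] != 0xFF:
            i += 1                       # inside entropy-coded data: crawl forward
            continue
        marker = data[i + 1]
        if marker == 0xFF:
            i += 1                       # padding byte before a marker
            continue
        if marker == 0x00 or 0xD0 <= marker <= 0xD9:
            i += 2                       # byte stuffing, RSTn, SOI, EOI: no payload
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def dump_dht(payload: bytes):
    """One DHT segment may hold several tables; print the 16 code-length counts."""
    pos = 0
    while pos < len(payload):
        tc_th = payload[pos]                      # high nibble: class, low nibble: id
        counts = list(payload[pos + 1:pos + 17])  # number of codes of length 1..16
        print(f"  class={tc_th >> 4} id={tc_th & 0x0F} codes={sum(counts)} lengths={counts}")
        pos += 17 + sum(counts)

data = open(sys.argv[1], "rb").read()
for marker, payload in iter_segments(data):
    if marker == 0xC4:                            # DHT marker
        print("DHT segment:")
        dump_dht(payload)
```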