tif output fails for 16-bit data if orig. input file was jpg

Post any defects you find in the released or beta versions of the ImageMagick software here. Include the ImageMagick version, OS, and any command-line required to reproduce the problem. Got a patch for a bug? Post it here.
Dabrosny
Posts: 111
Joined: 2013-10-02T10:49:39-07:00
Authentication code: 6789
Location: New York, US

tif output fails for 16-bit data if orig. input file was jpg

Post by Dabrosny »

Code: Select all

$ convert rose.jpg  -depth 16 -motion-blur 3  rose.tif
convert: BitsPerSample 16 not allowed for JPEG. `JPEGSetupEncode' @ error/tiff.c/TIFFErrors/563.
It looks like IM wants to use jpeg compression inside the tiff output file because my input file was jpg.
It can't apply jpeg compression to 16-bit data, so it fails.
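A quick way to check this theory, I think, is the 8-bit case, which doesn't error out (assuming identify's %C escape, which reports the compression type):

Code: Select all

$ convert rose.jpg rose8.tif
$ identify -format "%C\n" rose8.tif    # if the input's compression carries over, this reports JPEG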

Just because my input file is jpeg, why should that mean I want additional lossiness introduced after my processing, when I go to save the result in a usually-lossless format like tif?

If I were simply converting a .jpg image directly to .tif, I suppose it would be fine to store the original jpeg-compressed data directly in the tiff file (without decompressing and recompressing, which introduces further losses). But if I've done some processing, it's no longer the same image, so why does it matter that the original image was jpeg?

I understand that I can probably tell the tiff encoder explicitly what kind of compression to use, but I just want it to default to something reasonable: something that can hold my 16-bit result and, especially for 8-bit output, something that doesn't automatically introduce additional losses by applying jpeg compression to the tiff output.
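For example, something along these lines as an explicit override (a sketch; -compress Zip assumes the tiff delegate was built with zlib support):

Code: Select all

$ convert rose.jpg -depth 16 -motion-blur 3 -compress Zip rose.tif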

(I'm using 6.8.7-0 2013-09-30 Q16 (HDRI) in cygwin under win7.)
-Dabrosny [Using IM7.0.6 or higher, Q16 HDRI x64 native executable, command line, often invoked from cygwin bash/sh (as of Aug. 2017)]
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Re: tif output fails for 16-bit data if orig. input file was

Post by fmw42 »

Confirmed on IM 6.8.7.3 Q16 Mac OSX Snow Leopard (with and without HDRI)

As expected, the following does work:

Code: Select all

convert rose.jpg -depth 16 -motion-blur 3 -compress LZW rose.tiff
snibgo
Posts: 12159
Joined: 2010-01-23T23:01:33-07:00
Authentication code: 1151
Location: England, UK

Re: tif output fails for 16-bit data if orig. input file was

Post by snibgo »

Behaviour confirmed for v6.8.7-0 under Windows 7.

Likewise when the input is jpeg-in-tiff, the following fails:

Code: Select all

convert j.tiff -depth 16 j2.tiff
If this behaviour is bad, what would we like to happen? Perhaps: when "-depth 16" is encountered, if the default compression is JPEG, change it to None.
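To reproduce: a jpeg-in-tiff input can be made from the built-in rose: image (assuming libtiff was built with jpeg support), and an explicit override then succeeds:

Code: Select all

convert rose: -compress JPEG j.tiff
convert j.tiff -depth 16 -compress None j2.tiff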
snibgo's IM pages: im.snibgo.com
magick
Site Admin
Posts: 11064
Joined: 2003-05-31T11:32:55-07:00

Re: tif output fails for 16-bit data if orig. input file was

Post by magick »

We can reproduce the problem you posted and have a patch in ImageMagick 6.8.7-4 Beta available by sometime tomorrow. Thanks.
Dabrosny
Posts: 111
Joined: 2013-10-02T10:49:39-07:00
Authentication code: 6789
Location: New York, US

Re: tif output fails for 16-bit data if orig. input file was

Post by Dabrosny »

snibgo wrote:Behaviour confirmed for v6.8.7-0 under Windows 7.

Likewise when the input is jpeg-in-tiff, the following fails:

Code: Select all

convert j.tiff -depth 16 j2.tiff
If this behaviour is bad, what would we like to happen? Perhaps: when "-depth 16" is encountered, if the default compression is JPEG, change it to None.
But nobody has addressed the question of why we should jpeg-compress a tiff output file (even an 8-bit one) just because one of the input files was a jpg file.

What if there are several input files, some of which are jpg? What if many operations have been performed and the image bears very little resemblance to the original -- why would we want to use a lossy compression scheme for tiff output by default?

Isn't a basic principle not to discard data "by default" if the user didn't request a lossy operation (or format)? Especially since most users might not even be aware that they are implicitly choosing lossy output because of the format of the input file. The fact that the input was jpeg does not mean I prefer to lose additional information the next time I save the file -- it often simply means it came from a camera or was obtained in that format.

Particularly because we don't know if the output is meant as final, or just an intermediate step to future processing -- it would seldom be a good idea to decompress and lossy-compress repeatedly during processing.

Anyway, if the user had wanted jpeg compression, he likely would have specified a jpeg output file.

(I'm also not sure it makes sense to use zip compression in the tiff file merely because one or more of the input files were png files (which almost always use zip/deflate). But at least that's a lossless format, so no permanent harm is done. All of this seems to assume that the compression scheme of the input indicates our preference for the output, which doesn't follow. The output choice should depend only on considerations of losslessness, compatibility, storage efficiency, etc., none of which are necessarily related to the choice somebody made for one or more of the input files, especially if a lot of processing has occurred.)
Last edited by Dabrosny on 2013-10-26T23:27:28-07:00, edited 1 time in total.
-Dabrosny [Using IM7.0.6 or higher, Q16 HDRI x64 native executable, command line, often invoked from cygwin bash/sh (as of Aug. 2017)]
snibgo
Posts: 12159
Joined: 2010-01-23T23:01:33-07:00
Authentication code: 1151
Location: England, UK

Re: tif output fails for 16-bit data if orig. input file was

Post by snibgo »

As a general rule, IM creates the output with the same characteristics as the input. This includes, for example, the "-quality" setting.
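For example (assuming the built-in rose: image, and identify's %Q escape, which reports the quality setting):

Code: Select all

convert rose: -quality 50 q50.jpg
convert q50.jpg copy.jpg
identify -format "%Q\n" copy.jpg    # should report about 50, inherited from q50.jpg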

IM assumes users know what they are doing, so another principle is not to change settings without being told to. IM doesn't guess what the user might prefer.

True, it could be argued that where an input is jpeg or jpeg-in-tiff and the output is tiff, IM should always write losslessly unless the user explicitly asks for jpeg. But that would work against IM's principles.

Personally, when I write to tiff, I always specify the compression unless I know the input is from a tiff that I have written. And I don't use jpeg unless I really have to.
snibgo's IM pages: im.snibgo.com
Dabrosny
Posts: 111
Joined: 2013-10-02T10:49:39-07:00
Authentication code: 6789
Location: New York, US

Re: tif output fails for 16-bit data if orig. input file was

Post by Dabrosny »

snibgo wrote:Personally, when I write to tiff, I always specify the compression unless I know the input is from a tiff that I have written.
I think that's good advice. And when it's good advice to always change the default behavior, that usually means that the "good advice" behavior should *be* the default.

I think it's an even more important principle to preserve the image *data* created by processing operations, rather than to discard some of it in order to preserve a *setting* internal to one of the input files.

IM is not merely a format conversion tool that attempts to preserve some characteristics of the input format in the output. Straight conversion is a very narrow special case; if you do two dozen operations with multiple input images in different formats, what you're "preserving" isn't the original image, it's a (perhaps very) different image that you've gone to a great deal of trouble to create. I think a more important principle is not to lossily discard some of this hard-produced image data in order to "preserve" a setting (rather than the image data) of perhaps only one of the input files that was read in before the two dozen operations.

Anyway, the internal compression format of one of the possibly-many input files is not a "setting" that the user has necessarily "chosen", nor one he is even necessarily aware of -- in the case of a tiff or png or miff, for example, that he did not create (or did not create recently). The user should not have to examine the internal coding scheme of a file he obtains just in order to know how much of his *further* processing is going to be discarded by IM as a result of a choice made by someone else.

Also, by this logic, when the user reads in any kind of uncompressed data (e.g. MPC), IM should disable compression for all output file formats (or use as little as possible) -- writing an uncompressed png, for example. But the fact that I read in my data in some sort of raw format doesn't at all mean I want to save it to an uncompressed png file -- what is the benefit of that? Did the user read the file from an MPC file because he doesn't like png files to be compressed either? And by the preserve-the-setting principle, reading an uncompressed input format should mean that any jpeg output file is written with a quality of at least 96 and no chroma subsampling, which is about as close as this format gets to uncompressed, since the principle would be to preserve the non-compression setting of (one of) the input file(s) as much as possible.
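The near-uncompressed jpeg settings I mean would be something like the following (input.mpc being just a hypothetical raw-format input; -sampling-factor 1x1 disables chroma subsampling):

Code: Select all

$ convert input.mpc -quality 96 -sampling-factor 1x1 output.jpg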
Last edited by Dabrosny on 2013-10-27T09:59:33-07:00, edited 4 times in total.
-Dabrosny [Using IM7.0.6 or higher, Q16 HDRI x64 native executable, command line, often invoked from cygwin bash/sh (as of Aug. 2017)]
dlemstra
Posts: 1570
Joined: 2013-05-04T15:28:54-07:00
Authentication code: 6789

Re: tif output fails for 16-bit data if orig. input file was

Post by dlemstra »

I am working on some changes to fix the problem with the compression. I agree with you that the compression of the source image should not be used for the output image; the normal default compression should be used instead. We had the same problem with the quality setting a while back (viewtopic.php?f=3&t=23922). You can expect a patch later this week.

EDIT: I just submitted a patch for this.
.NET + ImageMagick = Magick.NET https://github.com/dlemstra/Magick.NET, @MagickNET, Donate