anthony wrote:Not my meaning. By 'random set of colors' I mean a set of colors that was selected by some quantization algorithm to best represent the image, that is, one that provides more colors for 'shades' and fewer for 'single odd-colored pixels'.
But the final set of colors to use is essentially not 'ordered' or 'evenly distributed'.
Could this be achieved by doing an error-diffusion on a "copy" image, setting the palette of that copy image from the real image, and then doing the ordered dither? Actually, I'm not sure that would work, as the palette can't just be "changed" without the OD happening at the same time. Can the palette somehow be "passed" to the OD function?
anthony wrote:The ordered-dither in ImageMagick currently can only use a uniform distribution of colors, where really it should be using a dither of two or three 'close' colors (as error-diffusion dithers do) to better color the picture, but without the 'error-noise'.
Hmm, is this a threshold map problem or an ordered-dither problem? Does having "close colors" defeat the purpose of a "dispersed" dither?
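For reference, here is roughly what the current "uniform distribution" approach amounts to per channel. This is a minimal Python sketch in my own words (function names are mine, not ImageMagick code):

```python
def ordered_dither_channel(value, levels, threshold, t_max):
    """Dither one channel value (0.0-1.0) to one of `levels` uniform
    levels, using a threshold from a dispersed map (0 <= threshold < t_max).

    This is the 'uniform distribution of colors' case: each channel is
    quantized independently, so the usable palette is a uniform grid."""
    scaled = value * (levels - 1)   # position between the two nearest levels
    base = int(scaled)              # lower of the two levels
    frac = scaled - base            # fractional remainder
    # Round up to the next level when the remainder beats the map threshold.
    if frac * t_max > threshold:
        base = min(base + 1, levels - 1)
    return base / (levels - 1)      # back to the 0.0-1.0 range
```

Because each channel is quantized independently against the same map, the effective palette is a uniform grid of levels^3 colors, which is exactly the limitation being described.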
Speaking of threshold maps, I wrote a smallish C-based map generator for ALL sizes of dispersed dither, including proper "void-and-cluster" dispersed maps for oddly shaped (or oddly powered) maps. I'm running some filesize-vs.-color tests on a bunch of different sizes right now to see if any produce "bang for my buck" on both. I've got powers of 2 up to 32x32, including all sizes of power-of-2 rectangles, and two "widescreen" versions (16x9 and 32x18). The quality differences on Lenna seem pretty slight, but I'm hoping they might pay off in GIF compression.
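For the plain power-of-two sizes, the classic recursive Bayer construction is enough. The following Python sketch is my own illustration, not the C generator mentioned above, and it doesn't cover the void-and-cluster cases:

```python
def bayer(n):
    """Return an n x n dispersed (Bayer) threshold map with entries
    0 .. n*n-1, for n a power of two.

    Standard recursion: each step tiles four copies of the previous map,
    offset so that successive thresholds land as far apart as possible."""
    if n == 1:
        return [[0]]
    half = bayer(n // 2)
    m = [[0] * n for _ in range(n)]
    for y in range(n // 2):
        for x in range(n // 2):
            v = 4 * half[y][x]
            m[y][x] = v                        # top-left quadrant: +0
            m[y][x + n // 2] = v + 2           # top-right: +2
            m[y + n // 2][x] = v + 3           # bottom-left: +3
            m[y + n // 2][x + n // 2] = v + 1  # bottom-right: +1
    return m
```

The non-power-of-two and rectangular maps are where void-and-cluster earns its keep; the recursion above simply doesn't apply there.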
anthony wrote:The real problem with doing it 'by level' is that the levels divide up the channels in the image, but GIF counts unique colors, which are very different things. It is just an odd (and not very good) quantization method. Not all the colors created by 'using a uniform colormap' get used by the image, so you add more levels to generate more colors.
Yeah, it's a very long process having a Perl script run through these different combinations to get a full color palette. At least with LAB, I can focus on L and just an 11-to-13 spread of AB. What kind of time commitment would it be for you to implement a better quantization method for OD?
anthony wrote:One improvement would be to do all this with gamma correction. That is, map the image to a 'linear' color map, do the quantization-dither, then map back. You may even get better results, as you will get more 'bright' color levels where you need them. Even Joel Yliluoma's algorithm includes this aspect.
You mean LAB? :)
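For concreteness, the "map to linear and back" round trip uses the standard sRGB transfer formulas; a minimal sketch (the quantization/dither step would sit between the decode and the encode):

```python
def srgb_to_linear(c):
    """Decode one sRGB channel (0.0-1.0) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light back to the sRGB transfer curve."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

Note that a perceptual mid-gray of 0.5 in sRGB is only about 0.214 in linear light, which is why uniform levels placed in linear space land more of them in the bright end, as described above.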
anthony wrote:Interesting idea.
"It's a wonderful idea......... but it doesn't work."
anthony wrote:I see no reason why you can not do your ordered dither quantization in LAB color space.
Just convert the image into LAB colorspace, apply the quantization/dither and convert back.
Yep, that's what I'm doing currently, and the dither is just plain better than doing it in RGB space. Furthermore, colors can be trimmed away by taking that LAB-space image back into sRGB space. I've noticed that resizing an image (smaller) massively increases the number of colors, so I've had to trim down the colors prior to the OD operation. Otherwise, the dither looks like crap; it's too much to go from 9 million colors to an OD of ~256 colors.
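For anyone following along, the sRGB-to-LAB chain is standard. Here's a minimal pure-Python sketch under the usual D65 assumptions (illustrative only, not ImageMagick's implementation):

```python
def srgb_to_lab(r, g, b):
    """Convert sRGB (0.0-1.0 per channel) to CIE L*a*b*, D65 white.
    Chain: decode gamma -> linear RGB -> XYZ matrix -> Lab cube-root."""
    def lin(c):  # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear sRGB -> XYZ (sRGB primaries, D65 white point)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):  # Lab companding: cube root with a linear toe
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Euclidean distances in this space track perceived color difference far better than raw RGB distances do, which is presumably why picking dither colors here behaves so much better.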
This is why I put in that bug report about the --colors option. The standard color quantizer just can't seem to decide on the right ones when it's presented with huge limits. Yet it can lossily switch between colorspaces pretty easily. So I end up with a mile-long command line like this:
Code:
convert anim.pam -colorspace YIQ -resize 300x169 miff:- | convert miff:- -depth 7 miff:- | convert miff:- -colorspace sRGB miff:- | convert miff:- -depth 32 -colorspace Lab -format "%o[%p/%n] %m %G %g %z-bit %r %k" -identify miff:anim.miff
(ordered dither command would follow)
Also, ImageMagick seems to "hang on" to as much detail about the colors as possible to prevent information/color loss, instead of just following what I say in the order I say it. Hence I have to do this pipe operation. If it were all combined into one command, it would simply convert to 32-bit Lab and resize, ignoring everything else.
Regardless, LAB is a wonderful, wonderful thing. You should broadcast it more. It should be used in every quantization, since every image is going to be quantized to look good to human eyes. And for that matter, the LAB->sRGB conversion saves on color space, since the grand majority of images are only ever going to be shown on an sRGB-capable output device. It would be awesome if there were a "LAB times sRGB" space that covered only the sRGB colors, but still had the mathematical scale to match human perception.