anthony wrote:Perhaps all the constants like MagickPI should have that 'L' for long double added to the end.
(set in "magick/image-private.h")
Appending an "L" to all the pi-related #define constants will have the unfortunate side effect of making all computations in which the constant appears be done in long double precision. Leaving the suffix off (as is currently done) has the possibly unfortunate side effect of making all computations in which the constant appears be done in double precision. However, it appears to me that IM "never" does anything "floating" at precision less than double, so in the latter case the unfortunate side effect is irrelevant. In the former case, it may, at least in principle, mess up calls to double-precision routines. Some languages will adjust the type of typed-in constants (like 3.1415...) based on what they are used for. Not C: if the compiler is ANSI compliant, 3.1415...7 is a double, no matter what it's used for.
Though I prefer to use the single line version. No need to allocate the extra stack variable, though gcc would probably optimize it out in any case.
Carefully put together multi-line versions are actually closer to SSA (Static Single Assignment) form, which is what gcc turns any code into before producing assembly. The only way to know for sure which one is best is to look at the assembly (no time for that now) and do very careful benchmarks. I would be really surprised if gcc were dumb enough to decide whether to keep something in a register on the basis of one line vs. multiple lines. Gcc will put a value in a register even if it is, in principle, a stack variable (within limits).
I've had good results with multi-line in the past.
Hmmm we could use a constant for
True. But this snippet (actually, the -alpha on the next line; I should have put the "-" with the sqrt because it makes this point clearer) will be computed at compile time (gcc will optimize away constant subexpressions, so you can pretty much do whatever in constant subexpressions and there will be no impact on run time performance). I reorganized the code so as to group constant things as much as possible to exploit this; this is why the new gaussian code uses one less flop AT RUN TIME. Given that this constant probably is only in the code once, there is no point creating a macro for it.
Why did you use
Unless of course you want the compiler to optimize it to a multiply. In which case why not just directly define a 1/MagickPI constant! Such a constant may be useful in a number of other places such as convolve 1D gaussian (blur) kernel generation.
It was an "optimize to multiply" (blush).
A 1/pi constant, indeed, is likely to be used in many places. On the other hand, (1.0/MagickPI) is "constantified" at compile time, so there is no significant difference between the two. If the code was instead written
then the divide is done at run time. If a few extra bits of precision (without a divide at run time) is what you're after, you are actually better off doing this:
Code:
const MagickRealType oneoverpi = 1.0/MagickPIL;
The final multiplication will then be done in MagickRealType precision. (Likewise, sin(pix)/MagickPIL would have the division done in long double precision, and in principle would be just a little more precise.) This being said, with ONE multiplication or division, I'm not sure there will be any difference in the result: there is no cancellation effect at play.
(I blushed because in this context this is a fairly pointless optimization.)
Now, you may say: Yeah, but now you are computing 1/pi in long double precision even if MagickRealType is double.
True, but the divide and cast to double are done at compile time. So no matter.
(The above is based on my experience and best understanding. Correct me if I'm wrong.)