[Insight-developers] VC++ Debug/Release float/double precision and Schrodinger's Cat.

Miller, James V (Research) millerjv@crd.ge.com
Mon, 15 Apr 2002 10:27:03 -0400


I am not too surprised by these differences.

The Intel floating point unit uses an internal representation for
doubles (80 bits) that is wider than the IEEE standard (64 bits).
This means all the registers are 80 bits, and numbers are converted
back to the IEEE standard representation whenever a double moves from
a register to memory (or cache).

When the code is optimized, the register usage is different from that
of a Debug build.  Intermediate calculations may remain in registers in
an optimized build, and therefore subsequent calculations will occur at
the higher 80-bit precision rather than the IEEE precision.
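
For example, something like the following sketch (untested; the exact
behavior depends on the optimizer) shows the effect: writing through a
volatile forces the value out to memory, where it is truncated to a
true 32-bit float, while an ordinary local may stay in a register.

#include <iostream>
int main()
{
   double dv = 1.9;

   // An ordinary local: the optimizer is free to keep this in an
   // 80-bit register, never rounding it down to 32 bits.
   float inRegister = static_cast<float>( dv );

   // A volatile must be written to memory, so the value really is
   // rounded to IEEE single precision before being read back.
   volatile float inMemory = static_cast<float>( dv );

   // In an optimized build this difference may be nonzero, exposing
   // the extra precision carried by the register copy.
   std::cout << inRegister - inMemory << std::endl;
   return 0;
}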

I think there may be a compiler option to force VC++ to perform all
calculations according to IEEE standards (if I remember right, the
"Improve Float Consistency" switch, /Op).

Jim




-----Original Message-----
From: Luis Ibanez [mailto:luis.ibanez@kitware.com]
Sent: Wednesday, April 10, 2002 10:51 PM
To: Insight Developers
Subject: [Insight-developers] VC++ Debug/Release float/double precision
and Schrodinger's Cat.



Hi,

A couple of VC++ experimental builds were submitted
today, and they show the following tests failing
again:

   - itkPointGeometryTest.cxx
   - itkVectorGeometryTest.cxx
   - itkCovariantVectorGeometryTest.cxx

What was special about these two builds is that they
were using the "Release" configuration (as opposed to
Debug).

Tracking down the problem, we got to the following
minimal case:


#include <iostream>
int main()
{
   // 1.9 is not exactly representable in binary, so the cast to
   // float drops the low-order bits of the double.
   double dv        = 1.9;
   float  fv        = static_cast<float>( dv );
   // Mathematically this difference is exactly zero: both operands
   // are the same cast of the same value.
   float  diff      = static_cast<float>( dv ) - fv;
   std::cout << "Diff = " << diff << std::endl;
   return 0;
}


If you compile this code in "Release" mode, the difference
printed out is "0". When the same code is compiled in
"Debug", the result is:  2.38419e-008

Curiously, in the case of the ITK tests the opposite
occurs: "Release" results in a lower precision (also
around 1e-8) while "Debug" is capable of computing an
actual "0".

Even worse, if we add a std::cout with one of the
components, the precision changes and the actual zero
is computed !! (Maybe VC++ is already implementing some
Quantum Computation techniques  :-)  ). More likely, it
seems that the compiler makes a decision on the storage
strategy depending on how the program uses the
variable in question... (just a guess).

You may reproduce this by uncommenting line 169 and/or
line 170 in itkCovariantVectorGeometryTest.cxx.

Note that it works fine in "Debug", fails in "Release"
with lines 169 and 170 commented out, and passes in "Release"
if either of the lines is uncommented.


This is quite bad news considering the number of
static_cast<>s that we have all over the place for
conversions between templated types. (Note that it is
not a problem with the templates themselves, but with
static_cast<> and the way operator= implicitly casts
floats and doubles !!)
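
Just to make this concrete, here is a hypothetical sketch (the names
are made up, this is not the actual ITK code) of the pattern we use
everywhere: the conversion boils down to a static_cast<>, so whether
the result is really rounded to 32 bits is left up to the compiler.

#include <iostream>

// Hypothetical stand-in for a templated geometric type; TCoordRep
// may be float in one instantiation and double in another.
template <typename TCoordRep>
struct Point1D
{
   TCoordRep value;
};

// The conversion between representations is a bare static_cast<>,
// so in an optimized build the result may stay in an 80-bit
// register instead of being truncated to the target precision.
template <typename TOut, typename TIn>
Point1D<TOut> CastPoint( const Point1D<TIn> & point )
{
   Point1D<TOut> result;
   result.value = static_cast<TOut>( point.value );
   return result;
}

int main()
{
   Point1D<double> pd = { 1.9 };
   Point1D<float>  pf = CastPoint<float>( pd );
   std::cout << pf.value << std::endl;
   return 0;
}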


A search on the web led to a somewhat related topic,
which is: what is the smallest x such that x + 1.0 is
still different from 1.0:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vccore/html/_core_why_floating_point_numbers_may_lose_precision.asp

These values are:

FLT_EPSILON = 1.192092896e-07F
DBL_EPSILON = 2.2204460492503131e-016

but...
still, that doesn't seem to be the case for the
examples in question.
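
For what it is worth, here is a small sketch illustrating what those
epsilons mean (the volatiles are there to push the sums through
memory, so the comparisons happen at true single/double precision
rather than in 80-bit registers):

#include <cfloat>
#include <iostream>
int main()
{
   // FLT_EPSILON is the smallest x with 1.0f + x != 1.0f;
   // half of it should round away on the store to memory.
   volatile float  f1 = 1.0f + FLT_EPSILON;
   volatile float  f2 = 1.0f + FLT_EPSILON / 2.0f;

   // Same idea at double precision with DBL_EPSILON.
   volatile double d1 = 1.0 + DBL_EPSILON;
   volatile double d2 = 1.0 + DBL_EPSILON / 2.0;

   std::cout << (f1 != 1.0f) << " " << (f2 == 1.0f) << std::endl; // 1 1
   std::cout << (d1 != 1.0)  << " " << (d2 == 1.0)  << std::endl; // 1 1
   return 0;
}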



In any case...
the tolerance for these three tests has been
relaxed to 1e-7, instead of the previous 1e-38 value.
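
In other words, the checks now amount to something like this (a
sketch of a tolerance-based comparison, not the literal test code):

#include <cmath>
#include <iostream>

// Accept any difference below a tolerance that is safely above the
// precision the compiler actually guarantees for floats.
bool AlmostEqual( double a, double b, double tolerance = 1e-7 )
{
   return std::fabs( a - b ) <= tolerance;
}

int main()
{
   double dv = 1.9;
   float  fv = static_cast<float>( dv );

   // May fail with an exact comparison in a "Release" build,
   // but passes with the relaxed tolerance.
   std::cout << AlmostEqual( static_cast<float>( dv ), fv ) << std::endl;
   return 0;
}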


Any ideas?



    Luis


_______________________________________________
Insight-developers mailing list
Insight-developers@public.kitware.com
http://public.kitware.com/mailman/listinfo/insight-developers