[Insight-developers] Tolerance on Rigid2DTransform

Miller, James V (Research) millerjv at crd.ge.com
Tue, 23 Mar 2004 13:55:07 -0500


In general, the precision is needed to ensure all the nice 
features of rigid transforms. It is very easy for precision 
effects to knock a matrix out of orthogonality.  I am sure that
you are well aware of the numerical sensitivity issues with eigenvector
solvers. I see the added precision in a transform as a necessity.

I do not see the "big deal" in writing additional precision (ignoring
for the moment whether data files "should" be hand editable).
We would not be forcing a user who is hand-crafting a MetaIO 
datafile to type in 12 digits of precision.  The user can type 
in whatever they like; the same code will read a field with 3 digits 
of precision as easily as one with 12.  I just think that if ITK 
writes a MetaIO transform, it should use enough precision that rigid 
transforms don't get knocked out of orthogonality due to precision 
problems. As for the space issue, this doesn't sound like much of 
an issue.  How many transforms are being written to files?  I can't
imagine that millions of transforms are being written. 

One option is to allow the user to set the precision on the MetaIO
object.  This allows people to craft their datafiles as they see fit.
If people are indeed writing out millions of transforms, maybe a binary
format is needed.  The binary format will use much less storage than
the ASCII format.  Another alternative is to support gzipping the ASCII format
(where the ASCII format has increased precision).

<Soapbox>
Over the last 15 years I have seen many people jump through hoops to
"get back" the precision they lost when outputting to an ASCII file 
format. Stereolithography (STL) files were predominantly ASCII and 
required a fair amount of code on import to "merge" coincident 
triangle vertices.  I have seen many computational geometry routines
fail due to the loss of precision in ASCII formats. There is the shape 
inspection system I mentioned in a previous message.  Now we have people 
using XML (which does not support binary data well) to represent precise 
geometry.  All of these problems could have been addressed with added
precision
(and binary formats).
</Soapbox>


-----Original Message-----
From: Stephen R. Aylward [mailto:aylward at unc.edu]
Sent: Tuesday, March 23, 2004 9:48 AM
To: Miller, James V (Research)
Cc: Julien Jomier (E-mail); insight-developers at public.kitware.com
Subject: Re: [Insight-developers] Tolerance on Rigid2DTransform


I agree with improving the precision of metaIO as an option.

My only concern is that we are addressing a very special case.   Most of 
our computations don't need 12 digits (it is rare to have any analysis 
produce 12 significant digits), using binary makes the file uneditable, 
and always writing 12 digits is misleading and a waste of space most of 
the time (to write 12 digits implies 12 significant digits, and 
generally we don't write PI :) ).

Perhaps we could add a member function to the group/transform writer to 
specify the number of digits to write for a transform?   That way it is 
buried in the one subclass that might need it.   Or am I being naive 
about how often (where and when) we need this amount of precision?

Thanks,
Stephen

Miller, James V (Research) wrote:
> Julien,
>  
> The CVS log for Rigid2DTransform says that you adjusted the tolerance 
> for the test for orthogonality because a transform written with MetaIO 
> was losing too much precision on a write (or read) operation, to the 
> point that it was no longer orthogonal when re-imported.
>  
> Instead of changing the orthogonality test, can you change the MetaIO to 
> write the transform with greater precision?  The best option would be to 
> write the transform out in binary (no loss of precision).  If the format 
> needs to be ASCII, you can write a greater number of decimal places 
> (something extreme like 12 or 20).
>  
> I have been burnt in the past in a shape inspection project by a 
> transform losing precision when written to disk.  I was trying to 
> measure deviations to under 0.001 inches.  The loss of precision, which 
> caused the transforms to no longer be orthogonal, meant that portions of 
> my geometry were being warped (non-rigidly) to be outside my 
> tolerances.  Eventually, I tracked down the problem to the calibration 
> system storing the transforms in ASCII (single precision). Due to the 
> legacy of the application, I couldn't make the switch to a binary format 
> for the transform.  But increasing the number of decimal places in the 
> ASCII formats gave the application just enough precision room to perform 
> properly.
>  
>  
> 
> Jim Miller
> _____________________________________
> Visualization & Computer Vision
> GE Research
> Bldg. KW, Room C218B
> P.O. Box 8, Schenectady NY 12301
> 
> millerjv at research.ge.com
> james.miller at research.ge.com
> (518) 387-4005, Dial Comm: 8*833-4005,
> Cell: (518) 505-7065, Fax: (518) 387-6981
> 
>  

-- 
===========================================================
Dr. Stephen R. Aylward
Associate Professor of Radiology
Adjunct Associate Professor of Computer Science and Surgery
http://caddlab.rad.unc.edu
aylward at unc.edu
(919) 966-9695