[Insight-developers] ImageIO::ReadBufferAsASCII and WriteBufferAsASCII do not handle signed and unsigned chars correctly

Luis Ibanez luis.ibanez at kitware.com
Wed Feb 17 15:08:20 EST 2010


Hi Brad,

Considering that this is a bug,
I would vote for inserting the PrintType() trait as you suggest.

The trait applies only to writing, doesn't it?

I don't quite see how we could use the trait for reading.

--

An attempt to provide backward compatibility would require
the files to have a version number (and they don't have one).

Looking forward, maybe we should version-stamp the files
that ITK creates. Something like:

        "   ##   This file was created by ITK 3.16.0 "

It would have to be a message inserted as a comment in the
file formats that support comments...


   My 2 cents,


        Luis


------------------------------------------------------------------------
On Wed, Feb 17, 2010 at 10:06 AM, Bradley Lowekamp
<blowekamp at mail.nih.gov> wrote:
> Hello,
> I discovered the following bug the other week:
> http://www.itk.org/Bug/view.php?id=10124
>
> Caused by the following code:
> template <typename TComponent>
> void WriteBuffer(std::ostream& os, const TComponent *buffer,
>                  ImageIOBase::SizeType num)
> {
>   const TComponent *ptr = buffer;
>   for (ImageIOBase::SizeType i = 0; i < num; i++)
>     {
>     if ( !(i % 6) && i ) { os << "\n"; }
>     os << *ptr++ << " ";
>     }
> }
> The solution is to use the numeric trait "PrintType" for the reading and
> writing here. The reason I believe this bug has not been found earlier is
> that the writing and reading are symmetric: you can write a file with ASCII
> chars and then read it back, and it will be correct in ITK. But the file
> would actually contain binary chars.
> Should this bug be fixed with my proposed solution?
> I suppose if backwards compatibility of the files is an issue (one could
> always use an older version of ITK to convert to binary or another file
> type), we would have to somehow specialize the ReadBuffer<> method. Then,
> if LEGACY is defined, we would need to scan a certain amount of the input
> and try to guess whether it was legacy binary chars or correct ASCII. But
> of course you can always have a binary file that looks like an ASCII file,
> so perhaps the size of the file could be used. But then again, the file
> could have some kind of data at the end. So this would likely be an ugly
> hack that tries to preserve the behavior but will likely cause more issues
> and maintenance problems.
> Brad
>
>
> ========================================================
>
> Bradley Lowekamp
>
> Lockheed Martin Contractor for
>
> Office of High Performance Computing and Communications
>
> National Library of Medicine
>
> blowekamp at mail.nih.gov
>
>
> _______________________________________________
> Powered by www.kitware.com
>
> Visit other Kitware open-source projects at
> http://www.kitware.com/opensource/opensource.html
>
> Kitware offers ITK Training Courses, for more information visit:
> http://kitware.com/products/protraining.html
>
> Please keep messages on-topic and check the ITK FAQ at:
> http://www.itk.org/Wiki/ITK_FAQ
>
> Follow this link to subscribe/unsubscribe:
> http://www.itk.org/mailman/listinfo/insight-developers
>
>

