No subject
Fri Oct 24 12:25:09 EDT 2014
a range of physical file formats - raw image, MetaImage, DICOM,
etc. This is very elegant.
Now in principle this means that any ITK application that accesses
an image can auto-detect the format of the specified file
and automatically use the appropriate class to read it. The application
just uses a pointer to the base class, of course.
Doing this requires static functions that can work out
what format a file is in (of course for a raw image this would not
work, as you still need the dimensionality), and this is not always
a trivial exercise. This all then gets packaged up
so that the application programmer just calls one function and
gets back a pointer to an object of the correct derived class.
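To make the idea concrete, here is a minimal self-contained sketch of that pattern in plain C++ - not the actual ITK classes (the names ImageReader, createReaderFor, etc. are invented for illustration). It probes a file's leading bytes with static "does this look like my format?" checks and hands back a base-class pointer to the right derived reader; the "DICM" magic at byte 128 is the real DICOM convention, while the MetaImage probe ("ObjectType" as the first header token) is an assumption:

```cpp
#include <algorithm>
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Base class: applications hold a pointer to this and never need to
// know which concrete file format is behind it.
struct ImageReader {
    virtual ~ImageReader() = default;
    virtual std::string formatName() const = 0;
};

// Hypothetical concrete readers, one per physical file format.
struct MetaImageReader : ImageReader {
    std::string formatName() const override { return "MetaImage"; }
};
struct DicomReader : ImageReader {
    std::string formatName() const override { return "DICOM"; }
};

// Static probes over the file's leading bytes. (A raw image has no
// magic number at all, which is exactly why auto-detection cannot
// work for it without the dimensionality being supplied separately.)
static bool looksLikeMetaImage(const std::vector<uint8_t>& head) {
    // Assumption: MetaImage headers are text beginning "ObjectType".
    const std::string magic = "ObjectType";
    return head.size() >= magic.size() &&
           std::equal(magic.begin(), magic.end(), head.begin());
}
static bool looksLikeDicom(const std::vector<uint8_t>& head) {
    // DICOM Part 10 files carry "DICM" after a 128-byte preamble.
    const std::string magic = "DICM";
    return head.size() >= 132 &&
           std::equal(magic.begin(), magic.end(), head.begin() + 128);
}

// The single entry point the application programmer calls: it probes
// the bytes and creates the correct derived-class object.
std::unique_ptr<ImageReader> createReaderFor(const std::vector<uint8_t>& head) {
    if (looksLikeMetaImage(head)) return std::make_unique<MetaImageReader>();
    if (looksLikeDicom(head))     return std::make_unique<DicomReader>();
    return nullptr; // unknown format
}
```

In ITK itself this role is played by a registry of ImageIO factories, but the shape of the interface - one call, base-class pointer back - is the same.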
So there are three basic models for accessing images:
. Transparent native access to all supported file formats,
for all applications.
. There is a native ITK format, and foreign formats are
converted to it in an extra step; all
applications then operate on the native format.
. Each application reads only one specific format, which has
to be documented.
So my question is: which model does ITK follow (or perhaps another)?
The reason I ask is that I downloaded some BrainWeb
data (MetaImage and raw image) and tried a couple of the
example programs on it. They failed with a variety
of I/O errors or segmentation faults (urk). Really, all
I want to be able to do to start with is read an image and
print out some numbers, or display it (but I haven't got VTK to build yet...),
to make sure things are working. The test programs ran OK, and the example
program MetaImageReadWrite happily did its thing.
A second question: if there is a native ITK image storage
format, is it tiled for efficiency?
thanks
Neil
More information about the Insight-users mailing list