ITK/Release 4 Planning

Revision as of 14:22, 7 July 2010 by Danmueller (talk | contribs) (Fixed formatting error in Architecture and Software engineering section)

ITK Version 4

This work is supported by ARRA funding from the NLM. The kick-off meeting for this project took place from June 28 to July 2, 2010, in Bethesda. A beta version of the software will be available by the end of March 2011. Bug fixes will continue to be contributed to the ITK version 3 code.

  • Move from cvs to git for distributed source code management
  • Remove support for the following compilers that have known incompatibilities with C++03 (http://en.wikipedia.org/wiki/C%2B%2B03#Language_standard)
    • Visual Studio version 6
    • Visual Studio version 7
    • Borland version 5.5
    • Sun Studio compilers
    • IRIX compilers
    • MWERKS (Metrowerks) compilers
    • gcc 2.96 compilers (and perhaps some very early 3.0 series compilers)
  • Improved ITK Wrapping
  • Addition of simple ITK layer
  • Enhanced modularity
  • Refactoring itk::FEM framework - V4
  • Improved DICOM support
  • Add streaming for video
  • Enhanced image registration framework

Wish List

The wish list is provided by members of the ITK development community. These requests are not necessarily included in the NLM-funded ITKv4 and ITKv4 A2D2 contracts.

Oriented Images

  • Support ND images in N+1 dimensions
    • 2D image can have an origin specified in 3D, thus a series of 2D images is not always Z-aligned
    • Support ND images in M dimensions where M > N.
  • All images are oriented - remove concept of an un-oriented image
  • Check use of orientation throughout ITK
  • Support re-orientation of ND oriented images
    • Code using itkOrientedImageFilter currently compiles only for 3D images

Image Representation

  • Allow the use of strides that are not equal to the image width
    • Would ease collaboration between ITK and OpenCV
    • Would allow the use of SSE operations
    • Might seem redundant with correct use of image regions, but is not: GetLargestPossibleRegion should correspond to the image width, not its stride
  • Drop the itk::Image::GetBufferPointer() method
    • This method has many times been described as an obstacle to implementing new image layouts.
    • As expressed above, we nevertheless need to be able to use the memory held by ITK images within other libraries. This could potentially be done by making itk::Image only a base class that has no knowledge of the memory layout, and by implementing different image subclasses.
  • Consider replacing ImportImageContainer by std::vector or using std::vector to implement it
    • This would give STL iterators that operate on the whole image literally for free, and make it easy to use the many algorithms implemented in the STL and Boost
    • Boost GIL also offers a compelling alternative for memory management of images. Unfortunately it still seems to be focused on 2D
    • Lorensen: ITK images are n-dimensional. The current iterator design enables that required functionality. If I recall, stl iterators were considered but did not meet the n-d requirements.
  • See Alternative Memory Models for ITK Images on the Insight Journal for an initial implementation of such ideas
  • Discuss a proper way of handling dynamic images (2D+t is not really 3D and 3D+t is difficult in terms of memory management)

Statistics

  • Complete statistics refactoring (see NAMIC sandbox)

FEM Meshes

Backward compatibility and cleanup

  • Clean up CMake variables
  • Remove Deprecated Features
    • Functions that have been deprecated (and appropriately marked as such) for more than 3 releases should be removed.
  • Modify the itkSetMacro to use a const reference argument, i.e. #define itkSetMacro(name,type) virtual void Set##name (const type & _arg)
    • This cannot be done in ITK 3.x because of backward compatibility issues
  • Make the semantics of the ITK containers match those of the STL
  • Set the default option values to provide the highest-quality results
    • Some filters have default option values chosen to produce quick rather than high-quality results. This is the case for the distance map filters, which produce squared results and do not use image spacing by default. This behavior is desirable in some situations, but should not be the default.
  • Supported compilers
    • We should reconsider the list of supported compilers. ITK 4.0 might be a good time to drop, for example, MSVC 6.0, which only implements a subset of modern C++.
    • One could even go so far as to pick a very small set of recent compilers that already implement parts of the upcoming C++0x standard. In particular, auto type deduction, static_assert, and perhaps lambda expressions should be available for writing new code.
  • Define a transition period during which developments need not be backward compatible
    • Such a period could be defined in terms of a number of "beta" releases

Image Registration

  • Set up the infrastructure to ease the implementation of modern optimization schemes for image registration
    • Requires Hessian or pseudo-Hessians of the cost function
    • Requires several types of update rules (additive, compositional, inverse compositional, etc.)
    • References: "Lucas-Kanade 20 years on" by Baker et al.; "Homography-based 2D Visual Tracking and Servoing" by Benhimane and Malis, "Groupwise Geometric and Photometric Direct Image Registration" by Bartoli; etc.
  • Allow the use of regularization terms that depend on the spatial transformation. See elastix for an example implementation.
  • Clean up the use of parameter scaling in the optimizers
    • One possibility would be for the optimizers to perform only unscaled minimization. It would then be up to a cost function wrapper to do the rescaling, and potentially to return the opposite of the cost function value. This is similar to how the vnl optimizers are used in ITK
    • See also elastix for another example implementation.
  • Optimizers should return the best visited value
  • Modify transforms to support a consistent API across transform types
  • Modify order of parameters to be consistent across transforms.
  • Modify the base class for optimizers to support key optimizer API calls such as SetMaximize and SetNumberOfIterations or SetMaximumIteration
  • Make the registration framework work with vector images natively.

Composite Transform

Architecture and Software engineering

  • Implement a pure virtual base class for each API to support instantiation of templated filters at run time with different dimensions. Many classes in ITK are templated, for example over spatial dimension and pixel type, or over images that are templated over spatial dimension and pixel type. However, many of the operations that are carried out do not depend on the spatial dimension and pixel type. A pure virtual base class for a particular filter, such as itk::ResampleImageFilter, would define the API of the ResampleImageFilter without implementing any of the functions that depend on TInputImage, TOutputImage or TInterpolatorPrecisionType. This would enable a pointer to the virtual base class to be manipulated in code, and a specialized implementation with a particular TInputImage, TOutputImage and TInterpolatorPrecisionType to be instantiated at run time. This would enable an image to be read in, its dimension and pixel type established at run time, and an appropriate specialized class instantiated and used, rather than the current practice of fixing the dimension and pixel type at compile time. For example, a single program could be written using the virtual base class API, with run-time instantiation of a 2D filter for floating-point pixels if the input is 2D with floating-point pixels, and a 3D filter with unsigned short pixels if the input is 3D with unsigned short pixels.

Can you explain a bit more?

  • Add interfaces to the algorithms that turn incomplete initialization into a compile-time error in "linear" environments, or enable some kind of validation, instead of throwing an exception, in "dynamic" environments. In both cases, the entry points that do the real work of the algorithm should then be guarded by assertions on the required parameters, not exceptions, since reaching them without proper initialization would always be a programming error.
    • By a "linear" environment I mean an implementation where the parameters and the input to an algorithm are completely determined by the program. In this case, an error in initialization (a missed SetXXX call) is usually a programming error. Adding an initialization method or constructor that takes all required parameters would let the developer move this error from run time to compile time.
    • By a "dynamic" environment I mean e.g. a GUI program, where the user sets the parameters of an algorithm dynamically. Here, a missing SetXXX is not a programming error, but a user error. However, since more than one parameter might be missing, exceptions are not a good way to report the problem. Instead, it should be possible to call a validation function that reports all the missing parameters to the user.
  • SmartPointer< T > should be implicitly convertible to SmartPointer< U > whenever T* can be implicitly converted to U*.
    • This might be achieved by using TR1 smart pointers instead of the ITK 3.0 smart pointer implementation. It might however then be more complex to use the default factory mechanism as with itkFactoryTestLib.cxx and itkObjectFactoryTest2.
  • Code Revision Control
    • Migrate to Subversion or Git
  • Portability issues
    • Discuss the use of fixed-width types to enhance portability and interoperability. This can be done by using cstdint from Boost.
    • Avoid the use of try_run in the CMakeLists and rely only on try_compile, to ease cross-compilation

Internationalization

  • Allow the use of unicode file names, see this bug report

Proper resampling/consistency in IndexToPhysicalPoint, ContinuousIndexToPhysicalPoint, Point*

Deformable Organisms

Make as many filters as possible able to run in place

In-place computation is a great way to avoid running out of memory when updating a pipeline. We should review all the existing filters to find those which could be implemented that way, and use InPlaceImageFilter as their base class. Also, a global setting to control the default in-place/not-in-place behavior would be useful.

Make the boundary conditions usage consistent across the toolkit

At the moment, some filters let the user provide a boundary condition, some don't but use one internally, and some don't use one at all. This should be made consistent across the toolkit, and where it makes sense, the boundary condition should be changeable by the user. Boundary conditions also make some filters hard to enhance with much more efficient algorithms; see BoxMeanImageFilter for an example.

Replace the current implementation of Marching Cubes and add a 4D version

The itkBinaryMask3DMeshSource filter currently provides the closest functionality to the Marching Cubes algorithm in ITK. However, the code of this filter has to be rewritten to match the quality standards of the rest of the toolkit. As part of this rewrite we should provide implementations for 2D (marching squares), 3D (marching cubes), and a 4D version that could be used for segmenting 3D+time datasets.

Normalize the Binary/Label/Grayscale usage in code and in the class names

Proposals:Consistent_usage_of_label_and_binary_images

Use an image template parameter in the complex related filters

Arbitrary precision type

For reconstruction and geometry processing, one might want to use an arbitrary-precision type. Boost has one, and GMP is now LGPL. This could also be a feature of the numerical library, and then the solvers could use it directly, if needed.

Inspired by the exact and filtered kernels in CGAL.

Exact geometric tests (point-in-circle => Delaunay)

If we cannot adopt arbitrary-precision types, in some cases it is sufficient to support exact geometric predicates for specific operations. This is mandatory for a robust Delaunay implementation. An implementation of the point-in-circle predicate, which is necessary and sufficient for exact 2D Delaunay, is in the public domain.

Note that arbitrary precision would allow for any exact geometric predicate.

3rd Party Libraries

  • Outdated libraries
    • Many 3rd-party libraries (e.g. libTIFF) are years out of date. One possibility is to update them to their newest official releases. Another is to remove them and require developers to use their own versions (i.e. USE_SYSTEM_TIFF).
  • Linear algebra package
  • A fairly complete list of potential libraries can be found at [2]
  • Numerical analysis package
    • The current numerical analysis package used by ITK is VNL. Its performance and robustness are not very good, and it is not actively maintained. We should therefore discuss alternative possibilities. Below is a list of potential alternatives:
    • The main numerical analysis tools we use from vnl are the optimizers. Most of these optimizers have an alternative quasi-ITK implementation in elastix.

Coding Style

  • The current descriptive naming scheme is certainly good for getting a grip on the functionality, but the lengths of the names are, IMHO, getting a bit out of hand. I would suggest grouping similar classes into namespaces: e.g. MeanSquaresImageToImageMetric, MatchCardinalityImageToImageMetric, and the like into ImageToImageMetric, with the specific part becoming the new class name (MeanSquares, MatchCardinality). For those preferring the long version, ImageToImageMetric::MeanSquares is at least as descriptive, and others could use a using directive in their code. These namespaces would also help with the automatically generated documentation, since classes would be better grouped by having namespace-related pages instead of only the flat alphabetical ordering that currently exists. For backward compatibility, one could provide defines that should, of course, only be enabled as a deprecated feature.
  • Currently, all include files are included using only the file name, with all the sub-directories of the ITK include tree added to the search path. This adds quite some overhead to the compile time, since all these directories have to be searched. As an alternative I'd suggest including the files like <BasicFilters/itkSomeFilter.h>, or even changing the naming to <itk/BasicFilters/SomeFilter.h>, and only adding the itk include base path to the search path. As a result ...
    • the preprocessor only needs to find the subdirectory and then the file therein,
    • and in addition, if someone wants to look up something in the source code without firing up an IDE that automatically does the file lookup, it is easier to locate the include file based on this additional path information.
    • To make transition easier, one could define an extra CMAKE variable that would add the old include path for a backward compatible compile and in case of the second include style, let the old itkSomeFilter.h file emit a backward compatibility warning - just like g++ has warnings about e.g. including an old style <iostream.h> instead of the new <iostream>.

Wavelets Framework

  • Wavelets are intensively used in operations such as denoising and compression. A common framework to decompose N-dimensional images with wavelets would be valuable. Such a framework could include:
    • a common way of representing wavelets,
    • a common way of representing multiscale images.

Label map writer

  • A class has been created to store label maps in memory; a writer/reader pair to persist this information may be valuable.

Discussion Points

  • Strings are std::strings: file names should be std::string, not const char* _arg
    • (From the current macro documentation: "Set character string. Creates member Set"name"() (e.g., SetFilename(char *)). The macro assumes that the class member (name) is declared a type std::string.")
  • Threading model modifications:
  1. Thread pools?
  2. Hierarchical thread handlers?
  3. Can threads deal with pushing data across hardware units with inconsistent memory models (i.e. 4-core SMP + GPU#1 + GPU#2)?
  • Need Explicit Transition Documentation:
  1. To facilitate the transition from ITK3 to ITK4, a set of migration documents and tools will be beneficial to researchers who will be migrating their tools to ITK4. As we adapt the reference applications to the changes occurring in ITK4, we will document the source code changes necessary to use the new version of ITK4. When possible, we will write a set of scripting tools that transform ITK3 code to ITK4, or identify old code that has become invalid and suggest alternative options. We will deliver this documentation both as a standalone document and as an online wiki-based web resource.
  • Transforms:
  1. Need the ability to deal with concatenated physical space mappings (Rigid -> Affine -> BSpline -> Displacement -> Rigid -> etc.)
  2. Better protocols for reading/writing transforms, including the ambiguous way that compound (i.e. not concatenated) transforms are represented.
  • Image Physical Space/Domain:
  1. There is a need for an image layer that does NOT have a physical space representation
    1. ImageBase -> DigitalImage -> ContinuousImage -> [Image (the current interpretation) | GeoSpatialImage | other physical space]
    2. Most pixel-wise filters would be defined over the DigitalImage layer to explicitly identify them as not dealing with physical space
    3. The digital-to-continuous conversion API is consistent (TransformIndexToPhysical & PhysicalToIndexTransform), but may be adaptable to address the needs of our expanding user base
    4. The current "Image" designator is explicitly defined as (LPS anatomical, East|North|Altitude) with direction cosines, spacing, origin and appropriate
    5. Units: a designation that can hint to the application level how to interpret physical space. Perhaps the filters would just verify that units are the same (the default being non-existent and therefore the same).
  • Platform support (a platform is a [Hardware, OS, Compiler, Compiler options] )
  1. Reduce build burden -- Explicit Templates & Pre-compiled headers:
    1. Under the assumption that the scope of the simple & wrapping frameworks will define a large swath of instantiations that almost all applications also need: by defining and pre-compiling those common tools (perhaps driven by the wrapping choices), compiling the application layers will be significantly faster. This should reduce the burden on application developers by shifting build-time costs away from the time of building the applications.
    2. We do have to be careful to limit this (or provide options to limit this) so that the default build does not take 10 hours.
    3. This does imply that the compiler MUST support explicit instantiation of templates
    4. As a side note, perhaps we also need to investigate pre-compiled header support for compilers to lower the development burden
  2. An explicit deprecation cycle for ALL compilers
    1. It is CRUCIAL that we define up front a MINIMUM time frame for how long we commit to supporting all platforms for each version of ITK4. It is expected that these dates will shift into the future, but not earlier.
      1. Windows 7, Visual Studio 2010 will be supported at least until Jan 1, 2017
      2. Windows XP, Visual Studio Express 8 will be supported at least until Jan 1, 2015
      3. Linux gcc version 4 (greater than 4.0.1) will be supported at least until Jan 1, 2012
  3. Compilers need to be compliant with the C++03 standard. http://en.wikipedia.org/wiki/C%2B%2B
  • IO interface changes:
  1. Transform components "can be written", but there is insufficient information to unambiguously derive the intent and appropriate use of those transform parameters. In particular, the parameters for BSpline are written to disk in two components, and the BSpline parameters are written linearly, but the implied grid for those parameters is not specified in the output file. As a use case, the BSpline transform type SHOULD be able to be instantiated from just the information in the file.
  2. Are there IO factory simplifications that could be made? i.e. remove separate factories for each type
  • Meta Data Dictionary:
  1. MetaData Dictionary items need "Traits" to help identify the "social status" of the element (such as ITK_SUPPORTED_PUBLIC, ITK_SUPPORTED_PRIVATE, UNSUPPORTED_PRIVATE, UNSUPPORTED_PUBLIC).
  2. The first contribution of the Iowa ITK development team was to introduce the "MetaDataDictionary" storage mechanism, through which all ITK objects can carry arbitrary key-value pairs of data. The MetaDataDictionary was not part of the original ITK framework definition, and has therefore not been consistently implemented throughout the toolkit. We propose to revisit the implementation of the MetaDataDictionary to improve and simplify the API so that it is easier to use. In particular, the Encapsulate() and Expose() functions can now be simplified when using standards-compliant C++ compilers, and new features for describing a dynamically created dictionary at run time will be introduced. A common use of the MetaDataDictionary has been to temporarily append features that were missing from the initial ITK framework. In many cases, the use of the MetaDataDictionary was subsequently followed by the addition of formal support for the feature in ITK, and this led to redundant information that was not always kept synchronized. Additionally, as new developers adopted the MetaDataDictionary for different uses, they each created inconsistent definitions for similar uses of the dictionary. We will review the internal ITK uses of the MetaDataDictionary, remove uses that are now redundant with formally accepted functionality, and consolidate similar uses of the dictionary. The MetaDataDictionary is most commonly used to pass additional information to or from the Input/Output mechanisms. We will define a protocol so that data object histories can be tracked through consistent instrumentation of ITK process objects to tag output objects. We will review the Input/Output classes and develop a standard-practices guideline for how to map the extra data to and from ITK in a consistent way. Wherever possible, we will use tags and definitions consistent with the dictionaries specified in the DICOM standard.
  • ITK Orientation
  1. Instrument the itk::ImageIO methods to record spatial image orientation in the MetaDataDictionary prior to its inclusion as a formal feature. This solution allowed representing which of 48 possible 3D orientations the image was in. Orientation was represented symbolically by 3-letter combinations, based on permuting (A)nterior/(P)osterior, (I)nferior/(S)uperior, and (L)eft/(R)ight. For example, RAS meant that the voxels are oriented Right-to-Left, Anterior-to-Posterior, Superior-to-Inferior, corresponding to Row/Column/Slice. This method of specifying orientation is useful but very limited; many anatomical scanners allow scanning at oblique angles with respect to anatomy. So after long discussion in the ITK developer community, a more general method was developed, which represents the orientation as a direction cosine matrix (DCM). The direction cosines specify a general 3D rotation of the image voxels with respect to the anatomy being scanned. The long experience with brain imaging research at Iowa will inform our implementation of these changes. However, we also recognize and highly value the expertise of the other members of the ITK developer community, and the implementation will be guided by the consensus judgment of the community as a whole. We will enhance the code infrastructure of ITK with respect to issues of position and orientation of images. Deliverables will comprise both changes to the core ITK library code and extensive regression tests to verify correct operation with the rest of the ITK library. There are three ways in which the handling of orientation in ITK needs to be improved:
  2. Currently the size of the direction cosines is linked to the number of dimensions in the image: for a 2D image they comprise a 2x2 DCM, for a 3D image a 3x3, etc. Conceptually, the current orientation description is best suited to describing images with exactly 3 spatial dimensions. We believe this needs to be changed and generalized to accommodate describing data sets within a higher-dimensional space, or with respect to a measurement direction that is different from the scanning direction. First, any 2D scan of anatomy will have an implied third axis, parallel to the direction of image acquisition. In particular, 2D DICOM slice images always report the three-dimensional position of the scan with respect to the patient. At present, when ITK reads a single 2D DICOM image, there is no way to preserve its relationship to 3D space. Finally, the orientation of a measurement frame with respect to sub-volumes is necessary to properly represent complex acquisition types (such as diffusion weighted imaging). Similarly, 2D images can have a three-dimensional image origin with a well-defined anatomical meaning. Discarding that when reading the image is a loss of information. In both cases (orientation and origin) it will be necessary to enforce backwards compatibility. This will be achieved in two ways: by adding new class methods that are explicitly three-dimensional, and by careful testing of all ITK classes to ensure their continued correct operation.
  • Need more FFT adaptors, and a slight refactoring of the FFT adaptor base to support several platforms
  1. It would be nice to have a compliant built-in FFT with reasonable performance (vnl's does not work)
  2. e.g. the Mac Accelerate framework, Intel MKL
  • Need adaptors for some subsets of (SVD/LAPACK/BLAS) numerics:
  1. Should be able to take advantage of the Mac Accelerate framework, Intel MKL, ATLAS, or others with minimal end-user configuration
  • Remove NeuralNetwork
  1. The current implementation in ITK requires that the network architecture and Input/Output routines be completely defined at compile-time. This severely limits the utility of the current implementation. The Input/Output mechanism of the current Neural Network currently only supports a proprietary network file format. This limits the ability to analyze the networks with complex external tools such as Matlab Neural Network toolkit.
  • Coding style, and enforcement:
  1. There should be some system somewhere that is compliant with the ITK coding style. It would be preferable if the primary development editors (VS, vim, emacs) could all be configured to support the style consistently.
  2. The uncrustify program MAY allow a conduit for helping new developers format to be KWSTYLE compliant.
  • All process object outputs should be available as data objects (including native types)
  1. There are times when a filter computes a value that is used as input to later stages of the pipeline; there needs to be a way to easily link such filters and have the pipeline work properly even when the linkage is a simple integer.
  • Should the default behavior of all filters be to release their input data?
  1. This may address the complaint that ITK has unacceptable memory usage.
  • All classes whose names end in "2", introduced as a mechanism to fix bugs without breaking backwards compatibility, should be removed.
  • itk_hashtable.h should be investigated to determine if a more standard implementation could replace the existing version. The current version has many different implementations for dealing with different compiler bases.