VTK/Image Rendering Classes
The display of slices of 3D images in VTK is currently much more difficult and much less flexible than it should be. A typical pipeline for displaying an oblique reformat of an image will consist of vtkImageReslice, vtkImageMapToColors, and vtkImageActor, and most VTK novices (in fact even most experts) will have great difficulty sorting through the many settings of these filters to achieve their desired result.
The primary goal of this project is to provide a 3D image mapper that will take care of all the details so that VTK users can display reformats with ease. In order for this to be done, the vtkImageActor must be replaced with a new image prop class that has SetMapper() and SetProperty() methods like vtkActor and vtkVolume.
I propose a new prop class called vtkImage that will be the image-display equivalent of vtkActor and vtkVolume. This class will have an associated property class and a hierarchy of mapper classes.
- vtkImage
- vtkImageActor (subclass of vtkImage)
- vtkImageProperty
- vtkImageMapper3D
- vtkImageResliceMapper (subclass of vtkImageMapper3D)
- vtkImageSliceMapper (subclass of vtkImageMapper3D)
In addition to the new classes, the vtkInteractorStyleImage will be modified so that it has a "3D mode" for interacting with 3D images.
vtkImage
Unlike vtkImageActor, the vtkImage class will have a very simple interface. In addition to the vtkProp3D methods for controlling position, orientation, and visibility, it will have a SetMapper() method and a SetProperty() method.
The existing vtkLODProp3D class will be modified so that it can make an LOD from a vtkImageProperty and vtkImageMapper3D. This will allow vtkImage to be part of an LOD, which was impossible with vtkImageActor. In addition, the VTK pickers will be modified to use this class in place of vtkImageActor, which will still be supported since it is a subclass.
By using alpha-blending (translucency), different images can be blended together at render time. They will be blended in the order in which they were added to the renderer. Potentially, each image could be assigned a layer number.
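Blending in add-order amounts to the standard "over" operator applied back to front. A minimal pure-Python sketch (not VTK code; this is the same blend the renderer would get from OpenGL's SRC_ALPHA/ONE_MINUS_SRC_ALPHA mode):

```python
# Sketch of back-to-front "over" compositing: each image layer is blended
# on top of the accumulated result, in the order it was added to the
# renderer.  Layers here are (colour, opacity) pairs for a single pixel.

def composite(layers):
    """Blend layers in add-order with the standard 'over' operator."""
    out_colour, out_alpha = 0.0, 0.0
    for colour, alpha in layers:
        # New layer goes on top of the accumulated result.
        out_colour = colour * alpha + out_colour * (1.0 - alpha)
        out_alpha = alpha + out_alpha * (1.0 - alpha)
    return out_colour, out_alpha

# An opaque base image with a half-transparent overlay on top:
colour, alpha = composite([(0.2, 1.0), (1.0, 0.5)])
```

With layer numbers (as suggested above), the only change would be sorting the layers by layer number before this loop instead of using add-order.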
vtkImageProperty
The property class will control the image display parameters:
- SetInterpolationType(int type)
- SetScalarRange(double range[2])
- SetLookupTable(vtkScalarsToColors *table)
- UseLookupTableScalarRangeOn() - default Off
- SetOpacity(double opacity)
- ShadeOff() - default On
Interpolation types will be Nearest, Linear, and Cubic. There will be no methods for setting window/level, only a method for setting the range.
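For applications that think in window/level terms, the conversion to and from a scalar range is one line in each direction, so dropping the window/level methods costs nothing. A sketch (the helper names are illustrative, not part of the proposed API):

```python
# Window/level and scalar range carry the same information:
#   range = [level - window/2, level + window/2]

def window_level_to_range(window, level):
    """Convert a window/level pair to a (low, high) scalar range."""
    return (level - 0.5 * window, level + 0.5 * window)

def range_to_window_level(low, high):
    """Convert a (low, high) scalar range back to (window, level)."""
    return (high - low, 0.5 * (low + high))

# Typical CT soft-tissue settings: window 400 HU, level 40 HU.
low, high = window_level_to_range(400.0, 40.0)
```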
The lookup table is optional. If no lookup table is given, then the range will still be applied: single-component data will be displayed as greyscale, and multi-component data will be displayed as color. If a lookup table is provided, the VectorMode of the lookup table can be used to control how multi-component data will be displayed.
The Shade parameter sets whether lighting has any effect. Parameters for Ambient, Diffuse, Specular, and Color could potentially be provided as well, but I'm not sure how useful they would be.
vtkImageMapper3D
This is the base class for 3D image mappers. It has SetInput() and SetInputConnection() methods, and inherits the abstract mapper methods for setting clipping planes.
vtkImageResliceMapper
A mapper for oblique reformats. The default behaviour of this mapper is to follow the camera, i.e. to always display the slice that intersects the camera focal point and is perpendicular to the view-plane normal. Having the slice follow the camera makes it very easy to modify the VTK interactor styles to work with this mapper.
The interface methods are as follows:
- UseFocalPointAsSlicePointOff() - default is On
- UseViewPlaneNormalAsSliceNormalOff() - default is On
- SetSlicePoint(double point[3])
- SetSliceNormal(double normal[3])
Internally, this mapper uses vtkImageReslice to reslice the image and create a 2D texture that is as large as the portion of the viewport covered by the image. This texture is then composited into the scene.
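The geometry involved can be sketched in a few lines: given a slice point and normal, build two in-plane axes and sample the volume along them. This is a rough pure-Python illustration with nearest-neighbour lookups; in the actual mapper, vtkImageReslice does this work with proper interpolation and spacing handling.

```python
import math

def plane_axes(normal):
    """Return two unit vectors spanning the plane with the given normal."""
    nx, ny, nz = normal
    n = math.sqrt(nx*nx + ny*ny + nz*nz)
    nx, ny, nz = nx/n, ny/n, nz/n
    # Pick a helper vector that is not parallel to the normal.
    hx, hy, hz = (1.0, 0.0, 0.0) if abs(nx) < 0.9 else (0.0, 1.0, 0.0)
    # u = normal x helper (normalised), v = normal x u.
    ux, uy, uz = ny*hz - nz*hy, nz*hx - nx*hz, nx*hy - ny*hx
    m = math.sqrt(ux*ux + uy*uy + uz*uz)
    ux, uy, uz = ux/m, uy/m, uz/m
    vx, vy, vz = ny*uz - nz*uy, nz*ux - nx*uz, nx*uy - ny*ux
    return (ux, uy, uz), (vx, vy, vz)

def reslice(volume, point, normal, size):
    """Sample a size x size oblique slice from volume[z][y][x],
    nearest-neighbour, centred on the given point."""
    u, v = plane_axes(normal)
    half = size // 2
    out = []
    for j in range(-half, size - half):
        row = []
        for i in range(-half, size - half):
            x = point[0] + i*u[0] + j*v[0]
            y = point[1] + i*u[1] + j*v[1]
            z = point[2] + i*u[2] + j*v[2]
            xi, yi, zi = int(round(x)), int(round(y)), int(round(z))
            if (0 <= zi < len(volume) and 0 <= yi < len(volume[0])
                    and 0 <= xi < len(volume[0][0])):
                row.append(volume[zi][yi][xi])
            else:
                row.append(0)  # background value outside the volume
        out.append(row)
    return out

# A tiny test volume where every voxel stores its z index; slicing with
# normal (0,0,1) through (1,1,1) should give a constant slice of 1s.
volume = [[[z for _ in range(3)] for _ in range(3)] for z in range(3)]
axial = reslice(volume, (1, 1, 1), (0.0, 0.0, 1.0), 3)
```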
vtkImageSliceMapper
A mapper that can only do x, y, or z slices. It will also be the ideal mapper for displaying 2D images, since it will directly map its input to a texture and will therefore be more efficient than the reslice mapper.
This interface is intentionally similar to that of vtkImageActor. Once this mapper is finished, vtkImageActor will use it internally.
- SetSliceNumber(int slice)
- UseDisplayExtentOn() - default is Off
- SetDisplayExtent(int extent[6])
Unlike vtkImageActor, this mapper will be able to do cubic interpolation.
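To show why cubic interpolation is worth having: a cubic kernel reconstructs a smooth curve through the samples, where linear interpolation produces visible facets between pixels. The Catmull-Rom kernel below is one common choice for illustration; the kernel the mapper actually uses may differ.

```python
def linear(p, t):
    """Linear interpolation between samples p[1] and p[2], t in [0,1]."""
    return p[1] + t * (p[2] - p[1])

def catmull_rom(p, t):
    """Catmull-Rom cubic through 4 samples; interpolates between
    p[1] and p[2] for t in [0,1], using p[0] and p[3] as neighbours."""
    return p[1] + 0.5 * t * (p[2] - p[0] + t * (
        2*p[0] - 5*p[1] + 4*p[2] - p[3] + t * (3*(p[1] - p[2]) + p[3] - p[0])))

# Four neighbouring pixel values forming the top of an edge:
samples = (0.0, 1.0, 1.0, 0.0)
```

Both kernels pass through the sample values themselves; between samples the cubic overshoots slightly (here 1.125 at the midpoint versus 1.0 for linear), which is what sharpens edges on screen.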
vtkInteractorStyleImage
This interactor style will be modified so that it can be used for 3D image reslicing. The following methods will be added:
- SetImageOrientation(double horizontal, double vertical)
When the mode is set to 3D, the following bindings will be present:
- Shift-LeftButton - rotate the camera, i.e. do oblique slicing
- Shift-RightButton - move the focal point in and out, i.e. scroll through slices
In both the 2D and 3D modes, the following new key bindings will be present:
- X - sagittal view
- Y - coronal view
- Z - axial view
These keys will change the position and the view-up of the camera. Exactly what view orientations will create the desired ax/cor/sag views will depend on the coordinate system used for the image data. Because of this, the direction cosines for the X, Y, and Z orientations can be set manually with the following methods:
- SetXViewLeftToRight(double vector[3])
- SetXViewUp(double vector[3])
- ditto for Y and Z
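Given the two user-supplied direction cosines, placing the camera is just a cross product: the camera sits along the view normal (left-to-right crossed with view-up) at some distance from the focal point, looking back at it. A sketch, with a deliberately generic example since the anatomically correct vectors depend on the image's coordinate system as noted above:

```python
def camera_for_view(left_to_right, view_up, focal_point, distance):
    """Return (position, view_up) for a camera whose view has the given
    left-to-right and up direction cosines, centred on the focal point."""
    lx, ly, lz = left_to_right
    ux, uy, uz = view_up
    # View normal = left_to_right x view_up; it points from the focal
    # point toward the camera.
    nx = ly*uz - lz*uy
    ny = lz*ux - lx*uz
    nz = lx*uy - ly*ux
    position = (focal_point[0] + distance * nx,
                focal_point[1] + distance * ny,
                focal_point[2] + distance * nz)
    return position, view_up

# Example vectors only -- which anatomical view this gives depends on
# the image's coordinate conventions:
pos, up = camera_for_view((1.0, 0.0, 0.0), (0.0, -1.0, 0.0),
                          (0.0, 0.0, 0.0), 100.0)
```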
A note on window/level: vtkInteractorStyleImage will now automatically find the property object of the image that is being displayed and modify it. Because of this, the user will no longer need to add window/level observers to the interactor style.
Shader programs for compositing
Anyone who uses Photoshop or The Gimp will be familiar with the concept of "layers" and the myriad ways that layers can be composited. It would be very nice if custom fragment shaders could be used to do the same thing with images in VTK. Some of the existing infrastructure for the VTK painters could probably be used for this.
Thick-slab projections
It is very common to use "thick slice" averaging to clean up noisy images, or to use MIP slabs when viewing blood vessels. In both of the mappers described above, it would be easy to use vtkImageProjection to achieve this.
Likelihood: sure thing.
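Both slab modes reduce to a simple reduction along the slice axis: mean for thick-slice averaging, max for a MIP. The sketch below shows only the math; vtkImageProjection is the class proposed to do this in the pipeline.

```python
def slab(slices, mode="mean"):
    """Collapse a stack of equally-sized 2D slices into one slab image.
    mode='mean' gives thick-slice averaging; mode='max' gives a MIP."""
    rows, cols = len(slices[0]), len(slices[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for j in range(rows):
        for i in range(cols):
            column = [s[j][i] for s in slices]  # values through the slab
            out[j][i] = max(column) if mode == "max" else sum(column) / len(column)
    return out
```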
Likewise, it would be easy to take multiple slices of the input image and reformat them onto an NxM grid on a single texture. Any application using this feature would have to be careful to convert points on the grid to the corresponding image points. This would be particularly tricky for picking and for getting pixel values.
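The application-side bookkeeping just described boils down to integer index arithmetic. A sketch, assuming slices are tiled left-to-right, top-to-bottom (the function name and tiling order are illustrative assumptions, not part of any proposed API):

```python
def grid_to_image(gx, gy, tile_w, tile_h, ncols, first_slice=0):
    """Map a pixel (gx, gy) on an NxM tiled texture back to the
    (slice, x, y) it came from.  Assumes row-major tiling: slices run
    left-to-right across each row of tiles, then down to the next row."""
    col, x = divmod(gx, tile_w)   # which tile column, and x within it
    row, y = divmod(gy, tile_h)   # which tile row, and y within it
    return first_slice + row * ncols + col, x, y

# Pixel (130, 70) on a grid of 64x64 tiles, 4 tiles per row:
slice_num, x, y = grid_to_image(130, 70, 64, 64, 4)
```

Picking would run this mapping in the hit direction; displaying pixel values would run it for the pixel under the cursor.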
An image mapper for geometry
It sounds kind of silly, but it could be useful. For example, a FEM could be sliced and displayed as an image. Or a 3D surface contour could be sliced and displayed as an image overlay. The main idea would be to take advantage of the image compositing: there would be no need for the user to make sure that the cutter was set up right and that all the correct depth offsets were applied. Instead, the user could just pop the geometry into an image mapper, and the overlay would be perfect every time.
Level-of-detail rendering
The image mappers are designed to achieve the highest-quality result, and although they will probably be fast enough to suit almost anyone, they will definitely slow down if several images are being composited. Hence, it would be nice to have some built-in LOD behaviour similar to the volume mappers. The most obvious speed-up would be to have vtkImageReslice sample the image at a lower resolution, and then have the texture re-interpolate to full resolution. A full-resolution and a quarter-resolution texture would always be allocated on the GPU, but only the desired texture would be updated and rendered.
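The quarter-resolution LOD amounts to keeping every fourth sample and letting the texture hardware expand it back to full size. A nearest-neighbour sketch of both directions (the real mapper would resample properly via vtkImageReslice, and the GPU would do the re-expansion with its own filtering):

```python
def downsample(image, factor=4):
    """Keep every factor-th pixel in each direction (the low-res LOD)."""
    return [row[::factor] for row in image[::factor]]

def upsample(image, factor=4):
    """Nearest-neighbour re-expansion, standing in for texture filtering."""
    out = []
    for row in image:
        wide = [v for v in row for _ in range(factor)]  # repeat across
        out.extend([list(wide) for _ in range(factor)])  # repeat down
    return out

# An 8x8 test image with unique pixel values:
image = [[i + 8 * j for i in range(8)] for j in range(8)]
small = downsample(image)          # 2x2, a sixteenth of the pixels
big = upsample(small)              # back to 8x8 for display
```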
Efficiency could also be improved by using 3D texture maps, but I'm not sure if this would be worthwhile. From a quality perspective, 16-bit textures would have to be used for medical images. Special fragment programs would be needed for cubic or other high-order interpolation. Lots of GPU memory would be required, particularly since the only time that such high speed would be needed is if multiple images are being composited. Also, it would be tricky to get it to work correctly with pipeline streaming. The mapper would have to be smart enough to know what parts of the 3D texture would be needed, so that the correct UpdateExtent could be set on the pipeline and the correct SubTexture loaded onto the GPU.
Overall, the current CPU-based implementation is already very fast. A GPU-based implementation does not confer the same advantages for image rendering as it does for volume rendering.