[vtkusers] Blending two volumes

John Biddiscombe john.biddiscombe at mirada-solutions.com
Fri Oct 10 06:40:30 EDT 2003


JB Wrote...

> Will the changes you've presumably been making for volview 2 
> be making their way into the general vtk repository? 

OK. Since there was strict radio silence on that one, I'll assume the answer is no. The reason I ask is because:

1) I'd like to render float data without the overhead of the image cast. I've had a quick look at the IsoSurface, MIP and Composite ray cast functions, and they have templates in place for different data types, which means this shouldn't require enormous changes. Is there anything I should look out for before I start?
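
For reference, this is the cast I'm trying to eliminate. A minimal sketch, assuming VTK 4.x-style pipeline calls; "floatVolume" is a placeholder for whatever float vtkImageData is already to hand (in practice a vtkImageShiftScale may be needed too, to squeeze the float range into 16 bits):

#include "vtkImageCast.h"
#include "vtkImageData.h"
#include "vtkVolumeRayCastCompositeFunction.h"
#include "vtkVolumeRayCastMapper.h"

vtkVolumeRayCastMapper *MakeMapper(vtkImageData *floatVolume)
{
  // The cast duplicates the whole volume in memory purely to satisfy
  // the mapper's scalar type requirements.
  vtkImageCast *cast = vtkImageCast::New();
  cast->SetInput(floatVolume);
  cast->SetOutputScalarTypeToUnsignedShort();

  vtkVolumeRayCastCompositeFunction *func =
    vtkVolumeRayCastCompositeFunction::New();
  vtkVolumeRayCastMapper *mapper = vtkVolumeRayCastMapper::New();
  mapper->SetVolumeRayCastFunction(func);
  mapper->SetInput(cast->GetOutput());   // this extra copy is the overhead
  return mapper;
}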

2) I'd like to render two registered volumes together, i.e. either

a) dual channel data - one channel for each volume, each with its own lookup table, then a blend of the resulting ray cast pixels from the two channels as a final step, in the same spirit as vtkImageBlend. The downside of this is that one image may be CT or MRI and very much larger (in terms of number of voxels) than the second (say PET) image; resampling the PET image to the same dimensions as the other dataset may result in a large memory footprint that it would be nice to avoid. In addition, the PET data will probably be float and have a larger dynamic range (so a 2-component tuple may not be suitable).

The upside is that the existing ray casting can be tweaked slightly to simply accumulate over separate channels and then blend as the final step. I'm guessing this is what Kitware have done for VolView 2 (NB I haven't downloaded it and looked).
If the two channels were in fact two field data arrays, then one could be int and the other float, allowing images to be joined arbitrarily - provided they have the same dimensions/orientation etc.
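
To make a) concrete, here is a minimal sketch of how the two-component volume could be assembled from existing filters. Assuming VTK 4.x-style calls; ctVolume/petVolume stand in for the two registered datasets and petToCT is a hypothetical registration transform:

#include "vtkImageAppendComponents.h"
#include "vtkImageData.h"
#include "vtkImageReslice.h"

// Resample PET onto the (larger) CT grid - exactly the memory cost
// described above, since PET gets blown up to CT dimensions.
vtkImageReslice *reslice = vtkImageReslice::New();
reslice->SetInput(petVolume);
reslice->SetInformationInput(ctVolume);  // match CT extent/spacing/origin
reslice->SetResliceTransform(petToCT);   // hypothetical vtkTransform
reslice->SetInterpolationModeToLinear();

// Join the two scalar arrays into one 2-component volume. Note that
// both inputs must share a scalar type here, which is precisely the
// 2-component tuple limitation mentioned above.
vtkImageAppendComponents *append = vtkImageAppendComponents::New();
append->AddInput(ctVolume);
append->AddInput(reslice->GetOutput());
// append->GetOutput() then needs a ray cast function that composites
// each component through its own lookup table and blends at the end.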

or

b) Leave both datasets as separate datasets and instead modify the ray cast mapper to loop over N(=2) volumes and fire a ray at each one. Each resulting pixel value can then be composited in the spirit of vtkImageBlend. The upside is that no extra resampling of volumes need be done (and the registered images can simply pass their transforms in, allowing arbitrary rotation of one relative to the other); the downside is that the time to render will usually double, since we're really doing two ray casts instead of one, plus two sets of everything regarding the ray-volume transformations etc.
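
Whichever way round, the final intermixing step is cheap. A standalone sketch of the per-pixel blend I have in mind (plain C++, opacity-weighted in the spirit of vtkImageBlend's SetOpacity; all names are mine):

// Blend two RGBA float buffers produced by two independent ray casts.
// Each pixel is [r,g,b,a] in [0,1]; weight picks the colour mix and the
// alphas are combined with the usual "over" rule.
void BlendPixels(const float *bufA, const float *bufB, float *out,
                 int numPixels, float weight)
{
  for (int i = 0; i < numPixels * 4; i += 4)
  {
    for (int c = 0; c < 3; c++)
    {
      out[i + c] = (1.0f - weight) * bufA[i + c] + weight * bufB[i + c];
    }
    out[i + 3] = bufA[i + 3] + bufB[i + 3] * (1.0f - bufA[i + 3]);
  }
}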

3) I'd like to be able to move cross hairs etc. around on the image without going through a re-render each time, so I'd want to store the previous image/depth map and allow the mapper to skip the recasting of rays and just do the intermixing step again. Is there any reason why this can't be done?
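
Roughly this control flow, in other words - pure pseudocode against a hypothetical mapper; none of these members exist in vtkVolumeRayCastMapper today:

// Hypothetical caching scheme: reuse the stored colour + depth buffers
// when only overlay geometry (cross hairs, bounding box, ...) changed.
void Render(vtkRenderer *ren, vtkVolume *vol)
{
  if (this->NothingChangedSince(this->LastCastTime))  // hypothetical test
  {
    this->IntermixGeometry(this->CachedImage, this->CachedDepthMap, ren);
    return;
  }
  this->CastAllRays(ren, vol);                        // the expensive part
  this->StoreBuffers(this->CachedImage, this->CachedDepthMap);
  this->IntermixGeometry(this->CachedImage, this->CachedDepthMap, ren);
}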

4) When rendering in interactive mode with (for example) a bounding box displayed, there are black square blocks around the regions where box lines appear. This is an artefact caused by rendering at reduced resolution and then copying across to the larger screen image - skipping those bits where the intermix geometry tells us there are polygons on screen. Is it possible to improve this logic to still copy into the black pixels and only skip the actual screen pixels where the (often thin) line appears? I've not looked at this carefully yet.
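
The copy logic I'm imagining is something like the following - illustrative only, the real mapper's internals differ, and GeometryCoversPixel/CopyPixel are names I've made up:

// Magnify the reduced-resolution ray cast image up to screen size,
// but skip only the individual screen pixels the geometry owns
// (a per-pixel z-buffer test), not the whole surrounding block.
for (int y = 0; y < screenHeight; y++)
{
  for (int x = 0; x < screenWidth; x++)
  {
    if (GeometryCoversPixel(x, y))   // hypothetical z-buffer test
    {
      continue;                      // leave the (thin) polygon line alone
    }
    int lx = x * lowResWidth  / screenWidth;   // nearest-neighbour fetch
    int ly = y * lowResHeight / screenHeight;  // from the low-res image
    CopyPixel(lowResImage, lx, ly, screenImage, x, y);
  }
}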

----

Does anyone have any thoughts on these approaches, and advice on how I should proceed - specifically, where I might be barking up the wrong tree? Currently I favour approach 2b because it has the advantage of not requiring any resampling, but the speed hit is going to be nasty, and perhaps, since RAM is cheap, I'd be advised to go with 2a instead...

[If anyone can suggest better methods for displaying multiple volumes, I'd welcome references/links etc.]

thanks 

JB




