[vtk-developers] RE: [vtkusers] Blending two volumes
Lisa Avila
lisa.avila at kitware.com
Mon Oct 13 22:52:26 EDT 2003
Hi John,
>1) I'd like to render float data without the overhead of the image cast.
>I've had a quick look at the IsoSurface, Mip and Composite ray cast
>functions, and they have templates in place for different data types, which
>means this shouldn't require enormous changes. Are there any things that I
>should look out for before I start?
Yes - the MIP and composite methods rely on the fact that you can build
tables (of at most 65536 entries) to hold the opacity and color functions.
The data values are directly used as an index into these tables.
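To illustrate, here is a rough sketch (made-up names, not the actual VTK code) of why the table approach works for short/uchar data but does not carry over directly to float - the scalar itself is the table index, so float data would first have to be shifted/scaled into the table range:

  #include <vector>

  struct RGBA { float r, g, b, a; };

  // Hypothetical classification step inside a composite/MIP ray cast loop.
  // The table has at most 65536 entries, matching the range of the scalars.
  RGBA ClassifySample(unsigned short scalar, const std::vector<RGBA>& table)
  {
      // Direct indexing: only valid because the scalar range fits the table.
      // Float data would need a shift/scale into [0, 65535] first, which is
      // what the image cast / preprocessing step currently provides.
      return table[scalar];
  }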
>2) I'd like to render two registered volumes - together. ie either
>
>a) dual channel data - one channel for each volume, each with its own
>lookup table, then a blend of the resulting ray cast pixels from the two
>channels as a final step in the same spirit as vtkImageBlend - the
>downside of this is that one image may be CT or MRI and be very much
>larger (in terms of number of voxels) than the second (say PET) image,
>resampling the PET image to the same dimensions as the other dataset may
>result in a large memory footprint that it would be nice to avoid. In
>addition, the PET data will probably be float and have a larger dynamic
>range (so a 2 component tuple may not be suitable).
>
>The upside is that the existing ray casting can be tweaked slightly to
>simply accumulate over separate channels and then blend as the final step.
>I'm guessing this is what Kitware have done for VolView 2 (NB I haven't
>downloaded it and looked, though).
No - the ray caster in VolView 2.0 is not derived from the VTK ray caster.
Also, the "blending" of the two channels is done during sampling along the
ray, not at the end. If you do it at the end, an opaque region in one
channel would not block stuff behind it in the other channel, leading to
serious depth perception issues.
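A tiny sketch of what I mean (the names and the blending math here are just for illustration, not the VolView implementation). The two channels are combined at each sample along the ray, and the combined sample is composited front-to-back, so an opaque sample in either channel occludes everything behind it in both:

  struct RGBA { float r, g, b, a; };

  // Standard front-to-back compositing of one classified sample into the ray.
  inline void CompositeInto(RGBA& ray, const RGBA& s)
  {
      float remaining = 1.0f - ray.a;
      ray.r += remaining * s.a * s.r;
      ray.g += remaining * s.a * s.g;
      ray.b += remaining * s.a * s.b;
      ray.a += remaining * s.a;
  }

  // Blend per sample: combine the two channels at each step, then composite.
  RGBA BlendDuringSampling(const RGBA* a, const RGBA* b, int n)
  {
      RGBA ray = {0, 0, 0, 0};
      for (int i = 0; i < n && ray.a < 0.98f; ++i)  // early ray termination
      {
          RGBA s;
          // Combined opacity: if either channel is opaque here, the combined
          // sample is opaque, so later samples of *both* channels are hidden.
          s.a = 1.0f - (1.0f - a[i].a) * (1.0f - b[i].a);
          float w = a[i].a + b[i].a;
          s.r = w > 0 ? (a[i].a * a[i].r + b[i].a * b[i].r) / w : 0.0f;
          s.g = w > 0 ? (a[i].a * a[i].g + b[i].a * b[i].g) / w : 0.0f;
          s.b = w > 0 ? (a[i].a * a[i].b + b[i].a * b[i].b) / w : 0.0f;
          CompositeInto(ray, s);
      }
      return ray;
  }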
>If the two channels were in fact two field data arrays, then one could be
>int and the other float, allowing images to be joined arbitrarily -
>providing they have the same dimensions/orientation etc.
>
>or
>
>b) Leave both datasets as separate datasets and instead modify the
>raycast mapper to loop over N(=2) volumes and fire a ray at each one. Each
>resulting pixel value can then be composited in the spirit of
>vtkImageBlend. The upside is that no extra resampling of volumes need be
>done, (and the registered images can simply pass their transforms in to
>allow arbitrary rotation of one relative to the other) - the downside is
>that the time to render will usually be double, since we're really doing
>two ray casts instead of one and two sets of everything regarding the
>ray-volume transformations etc.
Again, I would not recommend blending the values at the end.
>3) I'd like to be able to move cross hairs etc around on the image without
>going through a re-render each time, so I'd want to store the previous
>image/depth map and allow the mapper to skip the recast of rays and just
>do the intermixing step again. Is there any reason why this can't be done?
Yes - memory issues. There is no single depth map for volume rendering
(except when rendering an isosurface image). Rather, you would have a volume
(a set of samples per pixel on the image) describing the accumulated value
up to a certain depth. You could write a custom ray caster to do this, but
even if you allowed only 10 depths (probably not a pleasing final image but
might make a reasonable interactive image) and your image was 512x512 you
would need something like 10 MB (assuming a uchar RGBA value per depth) to
store this data.
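(For reference: 512 x 512 pixels x 10 depths x 4 bytes per RGBA sample = 10,485,760 bytes, i.e. roughly 10 MB.)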
>4) When rendering in interactive mode with (for example) a bounding box
>displayed, there are black square blocks around the regions where box
>lines appear. This is an artefact caused by rendering at reduced
>resolution and then copying across to the larger screen image - skipping
>those bits where the intermixed geometry tells us there are polygons on screen.
>Is it possible to improve this logic to still copy into the black pixels
>and only skip the actual screen pixel where the (often thin) line appears?
>I've not looked at this carefully yet.
You are correct, the issue is the fact that the geometry is rendered at one
resolution, and the ray casting may be done at a different resolution. One
ray sample in the ray cast image may cover 16 pixels on the screen (4x4
undersampling), and if the ray hit the bounding box in the z-buffer, it was
terminated early due to the intermixed geometry. We use the hardware to
interpolate the image up to screen resolution - this provides good speed.
Interpolating in software, we could apply some fancier logic, but with a
loss of speed.
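As an illustration only (assumed data layout, this is not how the VTK mapper is actually written), the software path would look roughly like this - test the z-buffer per screen pixel and copy the low-res ray value into every pixel not actually covered by geometry, instead of skipping the whole coarse block. It also glosses over the fact that a coarse ray terminated at the geometry only carries a partial accumulation:

  struct RGBA { unsigned char r, g, b, a; };

  void CopyRayImageAroundGeometry(const RGBA* rayImage, int rayW, int rayH,
                                  RGBA* screen, const float* zbuf,
                                  int screenW, int screenH, float farZ)
  {
      for (int y = 0; y < screenH; ++y)
      {
          for (int x = 0; x < screenW; ++x)
          {
              // Geometry was drawn at this exact screen pixel: leave it alone.
              if (zbuf[y * screenW + x] < farZ) continue;

              // Otherwise copy (nearest-neighbor) from the low-res ray cast
              // image, even when this pixel sits inside a block that also
              // contains geometry pixels.
              int rx = x * rayW / screenW;
              int ry = y * rayH / screenH;
              screen[y * screenW + x] = rayImage[ry * rayW + rx];
          }
      }
  }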
>----
>
>Does anyone have any thoughts on these approaches and advice on how I
>should proceed - specifically, where I might be barking up the wrong tree?
>Currently I favour the approach of 2b because it has the advantage of not
>requiring any resampling, but the speed hit is going to be nasty, and
>perhaps since RAM is cheap I'd be advised to go with the 2a approach...
Again, I would not recommend any approach that blends the pixel values
after accumulation. For flexibility you can separate the sampling from the
accumulation, allowing the rays from multiple volumes to be fed into one
accumulator (a bit of a break from the VTK design - it might be tough to
implement - but this is what I did in the past in an old volume
visualization system called VolVis, which allowed for multiple overlapping
datasets of varying size, dimensions, orientation, etc.).
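Very roughly, and with made-up interfaces (nothing like this exists in VTK), the separation would look something like: each volume knows how to return a classified sample at a world-space point through its own transform and lookup tables, and a single accumulator walks the ray once and composites the blended samples front-to-back.

  #include <vector>

  struct RGBA { float r, g, b, a; };

  // Hypothetical per-volume sampler: applies that volume's transform,
  // interpolation, and color/opacity classification.
  class VolumeSampler
  {
  public:
      virtual ~VolumeSampler() {}
      // Returns the classified color/opacity at a world-space point
      // (fully transparent if the point lies outside this volume).
      virtual RGBA Sample(const double worldPoint[3]) const = 0;
  };

  // One accumulator for any number of overlapping volumes, regardless of
  // their individual size, spacing, or orientation.
  RGBA AccumulateRay(const std::vector<VolumeSampler*>& volumes,
                     const double origin[3], const double dir[3],
                     int numSteps, double stepSize)
  {
      RGBA out = {0, 0, 0, 0};
      for (int i = 0; i < numSteps && out.a < 0.98f; ++i)
      {
          double p[3] = { origin[0] + i * stepSize * dir[0],
                          origin[1] + i * stepSize * dir[1],
                          origin[2] + i * stepSize * dir[2] };

          // Blend all volumes at this sample location (opacity-weighted mix),
          // then composite the blended sample front-to-back.
          RGBA s = {0, 0, 0, 0};
          float wSum = 0;
          for (size_t v = 0; v < volumes.size(); ++v)
          {
              RGBA sv = volumes[v]->Sample(p);
              s.r += sv.a * sv.r;  s.g += sv.a * sv.g;  s.b += sv.a * sv.b;
              s.a = 1.0f - (1.0f - s.a) * (1.0f - sv.a);
              wSum += sv.a;
          }
          if (wSum > 0) { s.r /= wSum; s.g /= wSum; s.b /= wSum; }

          float remaining = 1.0f - out.a;
          out.r += remaining * s.a * s.r;
          out.g += remaining * s.a * s.g;
          out.b += remaining * s.a * s.b;
          out.a += remaining * s.a;
      }
      return out;
  }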
Good luck,
Lisa