[vtkusers] Problems while rendering of large volumes

ianl ilindsay at insigniamedical.co.uk
Tue Sep 12 06:53:22 EDT 2017


Hi all,

Our software uses vtkSmartVolumeMapper to render 3D volumes. I am looking at
a problem rendering a relatively large, though not uncommon, CT set (~2000
slices). This used to work before the OpenGL2 improvements, but now fails
during the vtkTextureObject::Bind call, producing a black image. Smaller
volumes render fine.

I am using VTK 8.0.1, built with Visual Studio 2017. The graphics card is an
AMD R7 200 with 2GB of VRAM. vtkSmartVolumeMapper is running in its
auto-selection mode, and I can see it choosing the GPU volume mapper when I
step into the code.
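
For reference, the relevant part of our setup is roughly the following
(the reader, volume property, and renderer shown here are placeholders for
our actual DICOM pipeline):

    #include <vtkSmartPointer.h>
    #include <vtkSmartVolumeMapper.h>
    #include <vtkVolume.h>

    // "reader", "volumeProperty" and "renderer" are created elsewhere.
    vtkSmartPointer<vtkSmartVolumeMapper> mapper =
        vtkSmartPointer<vtkSmartVolumeMapper>::New();
    mapper->SetInputConnection(reader->GetOutputPort());
    mapper->SetRequestedRenderModeToDefault();  // auto selection; picks GPU here

    vtkSmartPointer<vtkVolume> volume = vtkSmartPointer<vtkVolume>::New();
    volume->SetMapper(mapper);
    volume->SetProperty(volumeProperty);
    renderer->AddVolume(volume);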

What no longer seems to happen is the reduction resampling when the texture
is determined to be too large for the card (via GPULowResMapper,
LowResGPUNecessary, etc.). This appears to be because the GetReductionRatio()
function on the GPU mapper returns a hard-coded scale of 1 (in
vtkOpenGLGPUVolumeRayCastMapper.h, VolumeOpenGL2 variant).
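
Paraphrasing the header from memory, the method in question is effectively:

    // From vtkOpenGLGPUVolumeRayCastMapper.h (VolumeOpenGL2), paraphrased:
    // the ratio is always reported as 1, so vtkSmartVolumeMapper never
    // concludes that a low-resolution resample is necessary.
    void GetReductionRatio(double* ratio) VTK_OVERRIDE
    {
      ratio[0] = ratio[1] = ratio[2] = 1.0;
    }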

Is this expected behaviour, or am I missing something? As an aside, I have
tried the SetPartitions() function on vtkOpenGLGPUVolumeRayCastMapper to
partition the volume so that it fits on the card. That does seem to work,
but it kills performance, so I would probably prefer the original behaviour,
i.e. resampling, while interacting with the volume (see the sketch below).
I seem to remember reading somewhere that this was an area being looked at
at some point.
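
This is what I tried, using the GPU ray cast mapper directly (the 2x2x2
split is just an example figure, not a tuned value):

    #include <vtkOpenGLGPUVolumeRayCastMapper.h>
    #include <vtkSmartPointer.h>

    vtkSmartPointer<vtkOpenGLGPUVolumeRayCastMapper> gpuMapper =
        vtkSmartPointer<vtkOpenGLGPUVolumeRayCastMapper>::New();
    gpuMapper->SetInputConnection(reader->GetOutputPort());
    gpuMapper->SetPartitions(2, 2, 2);  // split into 8 bricks so each fits in VRAM
    volume->SetMapper(gpuMapper);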

Can anyone shed any light on how to proceed? The only way forward I can
currently see is to implement our own version of the smart volume mapper
that performs the reduction itself, and perhaps uses the partitioning
technique when not interacting.
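
Sketch of the interactive fallback I have in mind (the 0.5 factors are
illustrative only; a real implementation would derive them from the card's
texture size limit), resampling down while interacting and switching back to
the full-resolution, partitioned mapper when idle:

    #include <vtkImageResample.h>
    #include <vtkSmartPointer.h>

    vtkSmartPointer<vtkImageResample> resample =
        vtkSmartPointer<vtkImageResample>::New();
    resample->SetInputConnection(reader->GetOutputPort());
    resample->SetAxisMagnificationFactor(0, 0.5);  // halve X resolution
    resample->SetAxisMagnificationFactor(1, 0.5);  // halve Y resolution
    resample->SetAxisMagnificationFactor(2, 0.5);  // halve Z resolution
    gpuMapper->SetInputConnection(resample->GetOutputPort());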

Thanks in advance,
Ian Lindsay


