[vtkusers] Cuda based volume rendering integration into VTK and Slicer3
Benjamin Grauer
bensch at bwh.harvard.edu
Tue Dec 18 10:11:23 EST 2007
Greetings,
I am working on integrating a Cuda based volume rendering technique
into VTK, and being new to VTK, I am somewhat overwhelmed by the huge
number of possibilities the API offers.
Currently the code that is independent of VTK is pretty slim and you can
take a look at it here:
http://svn.orxonox.net/subprojects/volrenSample/
I also tried to integrate the code into VTK and Slicer3. You may find
that code in my Slicer3 branch here:
http://www.na-mic.org/svn/Slicer3/branches/cuda/Modules/VolumeRenderingCuda
Here are the declarations from CUDA_renderAlgo.h for the Cuda rendering
algorithm that I want to integrate into a VTK wrapper class:
// Initialize the size
void CUDArenderAlgo_init(int sizeX, int sizeY, int sizeZ,
                         int dsizeX, int dsizeY);

// Load the data to the Cuda device
void CUDArenderAlgo_loadData(unsigned char* sourceData,
                             int sizeX, int sizeY, int sizeZ);

// Renders the image and produces a 2D array as the output
void CUDArenderAlgo_doRender(float* rotationMatrix, float* color,
                             float* minmax, float* lightVec,
                             int sizeX, int sizeY, int sizeZ,
                             int dsizeX, int dsizeY,
                             float dispX, float dispY, float dispZ,
                             float voxelSizeX, float voxelSizeY, float voxelSizeZ,
                             int minThreshold, int maxThreshold,
                             int sliceDistance);

// Retrieves the rendered result as raw unsigned char data
void CUDArenderAlgo_getResult(unsigned char** resultImagePointer,
                              int dsizeX, int dsizeY);

// Frees all memory
void CUDArenderAlgo_delete();
(From now on I will refer to these functions without the
'CUDArenderAlgo' prefix, e.g. _init.)
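To make the intended usage clearer, here is a minimal sketch of the call
sequence I have in mind; all concrete values (volume size, output size,
thresholds) and the helper function renderOnce are just placeholders:

#include "CUDA_renderAlgo.h"

// Minimal sketch of the intended call sequence (placeholder values only).
void renderOnce(unsigned char* volume, float* rotationMatrix,
                float* color, float* minmax, float* lightVec)
{
  const int sizeX = 256, sizeY = 256, sizeZ = 128;  // volume dimensions
  const int dsizeX = 512, dsizeY = 512;             // output image size

  CUDArenderAlgo_init(sizeX, sizeY, sizeZ, dsizeX, dsizeY);
  CUDArenderAlgo_loadData(volume, sizeX, sizeY, sizeZ);

  CUDArenderAlgo_doRender(rotationMatrix, color, minmax, lightVec,
                          sizeX, sizeY, sizeZ, dsizeX, dsizeY,
                          0.0f, 0.0f, 0.0f,   // dispX, dispY, dispZ
                          1.0f, 1.0f, 1.0f,   // voxel size
                          0, 255,             // min/max threshold
                          0);                 // sliceDistance

  unsigned char* resultImage = 0;
  CUDArenderAlgo_getResult(&resultImage, dsizeX, dsizeY);
  // ... hand resultImage over to VTK, e.g. as a texture ...

  CUDArenderAlgo_delete();
}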
As I understand it from a VTK point of view, I need the following:
1. An imageDataReader to read volume data of any kind
2. A filter converting the reader output to a cuda-able DataSet
3. A new vtkDataSet, call it vtkCudaDataSet, which will hold the volume
data used by the _init(), _loadData() and _delete() functions
4. A new vtkVolumeMapper that renders the scene using the lighting
model, a prepared Z buffer, the camera position and the aforementioned
DataSet, producing an image via the _doRender() function and retrieving
the result via _getResult()
   * plus a vtkTexture and a plane to render the result onto
5. An actor that places the volume into the scene
6. Chaining this pipeline together and attaching it to a rendering
window (a rough sketch of how I imagine this follows below)
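Here is that rough sketch of how I imagine chaining the pipeline
together; vtkCudaVolumeMapper is only a placeholder name for the mapper
class I still have to write, and the reader setup is of course
simplified:

#include "vtkImageReader.h"
#include "vtkVolume.h"
#include "vtkRenderer.h"
#include "vtkRenderWindow.h"
#include "vtkRenderWindowInteractor.h"
#include "vtkCudaVolumeMapper.h"   // does not exist yet, this is the class to write

int main()
{
  vtkImageReader* reader = vtkImageReader::New();            // step 1
  reader->SetFileName("someVolume.raw");                     // simplified

  vtkCudaVolumeMapper* mapper = vtkCudaVolumeMapper::New();  // steps 2-4
  mapper->SetInput(reader->GetOutput());

  vtkVolume* volumeActor = vtkVolume::New();                 // step 5
  volumeActor->SetMapper(mapper);

  vtkRenderer* renderer = vtkRenderer::New();                // step 6
  vtkRenderWindow* renWin = vtkRenderWindow::New();
  vtkRenderWindowInteractor* iren = vtkRenderWindowInteractor::New();
  renWin->AddRenderer(renderer);
  iren->SetRenderWindow(renWin);
  renderer->AddVolume(volumeActor);

  renWin->Render();
  iren->Start();
  // (Delete() calls omitted for brevity)
  return 0;
}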
Now, it would be very helpful for me to know which classes I have to
derive from to get the behaviour described above.
Also, these points are not really clear to me:
1. How do I get the camera position of a rendering window?
2. How do I get the lighting information for the scene? (My current
guess for 1. and 2. is sketched after this list.)
3. How do I display the texture planar to the viewing direction at the
correct place in space?
4. Is there a simple way to test everything I am doing using VTK and one
render window?
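For questions 1 and 2, my current guess is something like the following,
assuming a vtkRenderer* renderer is available inside the mapper's render
call; please correct me if this is the wrong place to look:

// Guess: read the active camera and the lights directly from the renderer.
vtkCamera* camera = renderer->GetActiveCamera();
double position[3], focalPoint[3], viewUp[3];
camera->GetPosition(position);
camera->GetFocalPoint(focalPoint);
camera->GetViewUp(viewUp);

vtkLightCollection* lights = renderer->GetLights();
lights->InitTraversal();
vtkLight* light;
while ((light = lights->GetNextItem()) != 0)
  {
  double lightPosition[3];
  light->GetPosition(lightPosition);
  // ... convert to the lightVec expected by CUDArenderAlgo_doRender() ...
  }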
I understand that these are a lot of questions, but any help is appreciated.
Best Regards,
Benjamin Grauer,
Surgical Planning Laboratory <http://www.spl.harvard.edu/>,
Brigham and Women's Hospital