[vtkusers] EyePosition / EyeTransform, GPU Volume Rendering, and Off-Axis Projection
Krzysztof Kamieniecki
krys at kamieniecki.com
Mon Nov 18 11:08:24 EST 2013
Better late than never? Someone asked me off-list for my patch, so I am
posting it here as well. I haven't tested this particular patch, because I
had to remove some other changes I made, but I think it will work.
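For anyone who wants to try it, here is a minimal, untested sketch of the kind
of setup the patch is aimed at: an off-axis stereo camera driving the GPU ray
cast mapper. The screen corners, the eye separation, and the volumePort
argument are placeholders you would replace with your own values and pipeline;
the renderer is assumed to already be attached to a render window.

#include <vtkAlgorithmOutput.h>
#include <vtkCamera.h>
#include <vtkGPUVolumeRayCastMapper.h>
#include <vtkRenderWindow.h>
#include <vtkRenderer.h>
#include <vtkSmartPointer.h>
#include <vtkVolume.h>

// Untested sketch: wire an off-axis stereo camera to the GPU ray cast mapper.
// Transfer functions / vtkVolumeProperty are omitted for brevity.
void SetupOffAxisVolumeRendering(vtkRenderer *ren, vtkAlgorithmOutput *volumePort)
{
  vtkCamera *cam = ren->GetActiveCamera();

  // Physical screen described in world coordinates (placeholder values).
  cam->SetScreenBottomLeft(-1.0, -0.75, -2.0);
  cam->SetScreenBottomRight(1.0, -0.75, -2.0);
  cam->SetScreenTopRight(1.0, 0.75, -2.0);

  cam->SetEyeSeparation(0.06);     // same units as the screen corners
  cam->SetUseOffAxisProjection(1); // the code path the patch changes

  vtkSmartPointer<vtkGPUVolumeRayCastMapper> mapper =
    vtkSmartPointer<vtkGPUVolumeRayCastMapper>::New();
  mapper->SetInputConnection(volumePort);

  vtkSmartPointer<vtkVolume> volume = vtkSmartPointer<vtkVolume>::New();
  volume->SetMapper(mapper);
  ren->AddVolume(volume);

  // Stereo rendering toggles the camera's LeftEye flag between the two
  // passes, which is what the patched GetEyePosition() keys off. Any
  // stereo type that renders both eyes will do; red/blue avoids the need
  // for a quad-buffer capable window.
  ren->GetRenderWindow()->SetStereoTypeToRedBlue();
  ren->GetRenderWindow()->StereoRenderOn();
}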
On Wed, Aug 1, 2012 at 1:01 PM, Aashish Chaudhary <
aashish.chaudhary at kitware.com> wrote:
> Hi,
>
> Very nice. Could you please send me the patch?
>
> Thanks,
>
>
> On Fri, Jul 6, 2012 at 5:24 PM, Krzysztof Kamieniecki
> <krys at kamieniecki.com> wrote:
> > I have a patch available (I'm not sure where to send it). I changed
> > vtkCamera and vtkOpenGLGPUVolumeRayCastMapper to produce and expect
> > GetEyePosition() to give the actual eye position in world space. This seems
> > to fix my problem. I think there may still be a clipping issue because of
> > the use of GetEyePlaneNormal in vtkRenderer. I have to fix some of my own
> > code to get more testing done, and I would like to hear more about the
> > original intent of EyePosition before making any more changes.
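As a quick sanity check of the patched behaviour (an untested sketch of my own,
not part of the patch): with UseOffAxisProjection on and the camera's transforms
up to date (e.g. after a render), the left- and right-eye positions returned by
GetEyePosition() should come back exactly EyeSeparation apart in world space.

#include <cmath>
#include <vtkCamera.h>
#include <vtkMath.h>

// Untested sketch: the two eye positions should straddle the camera
// position and be EyeSeparation apart.
bool CheckEyeSeparation(vtkCamera *cam)
{
  double leftEye[3], rightEye[3];

  cam->SetLeftEye(1);
  cam->GetEyePosition(leftEye);

  cam->SetLeftEye(0);
  cam->GetEyePosition(rightEye);

  double dist = std::sqrt(vtkMath::Distance2BetweenPoints(leftEye, rightEye));
  return std::fabs(dist - cam->GetEyeSeparation()) < 1e-6;
}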
> >
> > On Fri, Jul 6, 2012 at 1:54 PM, Krzysztof Kamieniecki
> > <krys at kamieniecki.com> wrote:
> >>
> >> Hi Aashish,
> >>
> >> I would like to try to take a stab at it.
> >>
> >> Based on the only use of GetEyePosition in VTK, it seems like
> >> GetEyePosition should return world-space positions. (This expectation
> >> also explains the weird clipping problems I have been seeing.)
> >>
> >> from vtkRenderer::ResetCameraClippingRange
> >> ...
> >>   if(!this->ActiveCamera->GetUseOffAxisProjection())
> >>     {
> >>     this->ActiveCamera->GetViewPlaneNormal(vn);
> >>     this->ActiveCamera->GetPosition(position);
> >>     this->ExpandBounds(bounds,
> >>                        this->ActiveCamera->GetModelTransformMatrix());
> >>     }
> >>   else
> >>     {
> >>     this->ActiveCamera->GetEyePosition(position);
> >>     this->ActiveCamera->GetEyePlaneNormal(vn);
> >>     this->ExpandBounds(bounds,
> >>                        this->ActiveCamera->GetModelViewTransformMatrix());
> >>     }
> >> ...
> >>
> >> My current understanding is that the off-axis projection perspective
> >> transformation matrix is being used by the GPU renderer to set up the
> >> outer polygons.
> >>
> >> So the main thing to do would be to produce world-coordinate eye positions
> >> and use those instead of the camera position in the volume renderer, so
> >> the shaders would get the proper ray directions?
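In other words, something like the following sketch, written here with public
vtkCamera calls only (the patch further down does the equivalent by reading the
sideways vector straight out of the ModelView matrix). Untested, for
illustration only.

#include <vtkCamera.h>
#include <vtkMath.h>

// Untested sketch: world-space eye position = camera position shifted by
// half the eye separation along the view "right" vector.
void ComputeWorldEyePosition(vtkCamera *cam, bool leftEye, double eye[3])
{
  double pos[3], dop[3], up[3], right[3];
  cam->GetPosition(pos);              // camera position, world space
  cam->GetDirectionOfProjection(dop); // unit vector towards the focal point
  cam->GetViewUp(up);

  vtkMath::Cross(dop, up, right);     // right = forward x up
  vtkMath::Normalize(right);

  double half = cam->GetEyeSeparation() / 2.0;
  double offset = leftEye ? -half : half;

  for (int i = 0; i < 3; ++i)
    {
    eye[i] = pos[i] + offset * right[i];
    }
}

That eye position, transformed into the dataset's coordinates, would then be
what the mapper hands to the shaders as the ray origin.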
> >>
> >> What is the desired functionality of EyeTransform? WorldToEye, EyeToWorld,
> >> CameraToEye, EyeToCamera?
> >>
> >> Best Regards,
> >> Krys
> >>
> >> On Fri, Jul 6, 2012 at 1:41 PM, Aashish Chaudhary
> >> <aashish.chaudhary at kitware.com> wrote:
> >>>
> >>> Hi Krzysztof,
> >>>
> >>> On Fri, Jul 6, 2012 at 1:12 PM, Krzysztof Kamieniecki
> >>> <krys at kamieniecki.com> wrote:
> >>> > Hi,
> >>> >
> >>> > How is the EyePosition and EyeTransform functionality supposed to
> >>> > behave? Should the positions be in camera space or world space?
> >>> >
> >>> > Should they be synchronized with EyeSeparation / LeftEye when in
> >>> > Off-Axis
> >>> > Projection mode?
> >>>
> >>> >
> >>> > I am trying to get the GPU volume renderer to work with stereo 3D in
> >>> > off-axis projection mode; one problem seems to be that the camera
> >>> > position is used instead of the eye position.
> >>>
> >>> This is a known problem and, based on my prior discussions with volume
> >>> rendering experts here at Kitware, I believe it needs to be fixed on
> >>> the volume rendering side of things.
> >>> If you want to work on it yourself, we can provide you with guidance
> >>> for this task.
> >>>
> >>> Thanks,
> >>>
> >>>
> >>>
> >>> >
> >>> > Thanks,
> >>> > Krys
> >>> >
> >>>
> >>>
> >>>
> >>> --
> >>> | Aashish Chaudhary
> >>> | R&D Engineer
> >>> | Kitware Inc.
> >>> | www.kitware.com
> >>
> >>
> >
> >
> >
>
>
>
> --
> | Aashish Chaudhary
> | R&D Engineer
> | Kitware Inc.
> | www.kitware.com
>
-------------- next part --------------
diff --git a/Rendering/Core/vtkCamera.cxx b/Rendering/Core/vtkCamera.cxx
index 2dfc45d..4ff4eb7 100644
--- a/Rendering/Core/vtkCamera.cxx
+++ b/Rendering/Core/vtkCamera.cxx
@@ -1564,9 +1564,27 @@ void vtkCamera::GetEyePosition(double eyePosition[3])
return;
}
- eyePosition[0] = this->EyeTransformMatrix->GetElement(0, 3);
- eyePosition[1] = this->EyeTransformMatrix->GetElement(1, 3);
- eyePosition[2] = this->EyeTransformMatrix->GetElement(2, 3);
+ //The Eye position starts at the camera position
+ eyePosition[0] = this->Position[0];
+ eyePosition[1] = this->Position[1];
+ eyePosition[2] = this->Position[2];
+
+ //in Off Axis Projection, the Eye Separation dictates where the eye is
+ if (this->UseOffAxisProjection)
+ {
+ double es = 0.0;
+
+ //Get offset based on which Eye it is
+ if(this->LeftEye)
+ es = -(this->EyeSeparation / 2.0);
+ else
+ es = this->EyeSeparation / 2.0;
+
+ //Use the "sideways" vector from the ModelView transform to shift the eye position
+ eyePosition[0] += es * this->ModelViewTransform->GetMatrix()->GetElement(0,0);
+ eyePosition[1] += es * this->ModelViewTransform->GetMatrix()->GetElement(0,1);
+ eyePosition[2] += es * this->ModelViewTransform->GetMatrix()->GetElement(0,2);
+ }
}
//-----------------------------------------------------------------------------
@@ -1581,10 +1599,10 @@ void vtkCamera::GetEyePlaneNormal(double normal[3])
// Homogeneous normal.
double localNormal[4];
- // Get the normal from the screen orientation.
- localNormal[0] = this->WorldToScreenMatrix->GetElement(2, 0);
- localNormal[1] = this->WorldToScreenMatrix->GetElement(2, 1);
- localNormal[2] = this->WorldToScreenMatrix->GetElement(2, 2);
+ // Get the normal from Model / View transform.
+ localNormal[0] = this->ModelViewTransform->GetMatrix()->GetElement(2, 0);
+ localNormal[1] = this->ModelViewTransform->GetMatrix()->GetElement(2, 1);
+ localNormal[2] = this->ModelViewTransform->GetMatrix()->GetElement(2, 2);
localNormal[3] = 0.0;
// Just to be sure.
diff --git a/Rendering/OpenGL/vtkXGPUInfoList.cxx b/Rendering/OpenGL/vtkXGPUInfoList.cxx
index 262ae57..9811c46 100644
--- a/Rendering/OpenGL/vtkXGPUInfoList.cxx
+++ b/Rendering/OpenGL/vtkXGPUInfoList.cxx
@@ -70,9 +70,10 @@ void vtkXGPUInfoList::Probe()
{
if(XNVCTRLIsNvScreen(dpy,i))
{
- int ramSize;
+ int ramSizeInt;
Bool status=XNVCTRLQueryAttribute(dpy,i,0,
- NV_CTRL_VIDEO_RAM,&ramSize);
+ NV_CTRL_VIDEO_RAM,&ramSizeInt);
+ vtkIdType ramSize = ramSizeInt; //needed because the card can have more than 2GB of RAM
if(!status)
{
ramSize=0;
diff --git a/Rendering/VolumeOpenGL/vtkOpenGLGPUVolumeRayCastMapper.cxx b/Rendering/VolumeOpenGL/vtkOpenGLGPUVolumeRayCastMapper.cxx
index ba4ff3d..923db35 100644
--- a/Rendering/VolumeOpenGL/vtkOpenGLGPUVolumeRayCastMapper.cxx
+++ b/Rendering/VolumeOpenGL/vtkOpenGLGPUVolumeRayCastMapper.cxx
@@ -3290,6 +3290,9 @@ void vtkOpenGLGPUVolumeRayCastMapper::SetupRender(vtkRenderer *ren,
glClearColor(0.0, 0.0, 0.0, 0.0); // maxvalue is 1
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
+
+ // CAMERA VIEW TRANSFORM IS NO LONGER DONE HERE, it is done by vtkOpenGLCamera/vtkCamera
+
//double aspect[2];
//ren->ComputeAspect();
//ren->GetAspect(aspect);
@@ -3319,9 +3322,9 @@ void vtkOpenGLGPUVolumeRayCastMapper::SetupRender(vtkRenderer *ren,
glPushMatrix();
this->TempMatrix[0]->DeepCopy(vol->GetMatrix());
this->TempMatrix[0]->Transpose();
-
- // insert camera view transformation
glMultMatrixd(this->TempMatrix[0]->Element[0]);
+
+ // Set up additional OpenGL rendering options
glShadeModel(GL_SMOOTH);
glDisable( GL_LIGHTING);
glEnable (GL_CULL_FACE);
@@ -3387,7 +3390,8 @@ void vtkOpenGLGPUVolumeRayCastMapper::ClipBoundingBox(vtkRenderer *ren,
this->InvVolumeMatrix->Invert();
// Normals should be transformed using the transpose of the
// invert of InvVolumeMatrix.
- vtkMatrix4x4::Transpose(vol->GetMatrix(),this->TempMatrix[0]);
+ vtkMatrix4x4::Transpose(this->InvVolumeMatrix,this->TempMatrix[0]);
if(this->BoxSource==0)
{
@@ -3411,7 +3415,7 @@ void vtkOpenGLGPUVolumeRayCastMapper::ClipBoundingBox(vtkRenderer *ren,
double camPos[4];
double camPlaneNormal[4];
- cam->GetPosition(camWorldPos);
+ cam->GetEyePosition(camWorldPos);
camWorldPos[3] = 1.0;
this->InvVolumeMatrix->MultiplyPoint( camWorldPos, camPos );
if ( camPos[3] )
@@ -3432,7 +3436,7 @@ void vtkOpenGLGPUVolumeRayCastMapper::ClipBoundingBox(vtkRenderer *ren,
camWorldDirection[3] = 1.0;
// Compute the normalized near plane normal
- this->TempMatrix[0]->MultiplyPoint( camWorldDirection, camPlaneNormal );
+ this->InvVolumeMatrix->MultiplyPoint( camWorldDirection, camPlaneNormal );
vtkMath::Normalize(camWorldDirection);
vtkMath::Normalize(camPlaneNormal);
@@ -3459,6 +3463,7 @@ void vtkOpenGLGPUVolumeRayCastMapper::ClipBoundingBox(vtkRenderer *ren,
camNearPoint[0] /= camNearPoint[3];
camNearPoint[1] /= camNearPoint[3];
camNearPoint[2] /= camNearPoint[3];
+ camNearPoint[3] = 1.0;
}
this->InvVolumeMatrix->MultiplyPoint( camFarWorldPoint, camFarPoint );
@@ -3467,6 +3472,7 @@ void vtkOpenGLGPUVolumeRayCastMapper::ClipBoundingBox(vtkRenderer *ren,
camFarPoint[0] /= camFarPoint[3];
camFarPoint[1] /= camFarPoint[3];
camFarPoint[2] /= camFarPoint[3];
+ camFarPoint[3] = 1.0;
}
@@ -3497,7 +3503,6 @@ void vtkOpenGLGPUVolumeRayCastMapper::ClipBoundingBox(vtkRenderer *ren,
this->NearPlane->SetOrigin( camNearPoint );
this->NearPlane->SetNormal( camPlaneNormal );
this->Planes->AddItem(this->NearPlane);
-
if ( this->ClippingPlanes )
{
this->ClippingPlanes->InitTraversal();
@@ -5075,7 +5080,7 @@ void vtkOpenGLGPUVolumeRayCastMapper::RenderRegions(vtkRenderer *ren,
double distance2[27];
double camPos[4];
- ren->GetActiveCamera()->GetPosition(camPos);
+ ren->GetActiveCamera()->GetEyePosition(camPos);
double volBounds[6];
this->GetInput()->GetBounds(volBounds);
@@ -5581,7 +5586,7 @@ int vtkOpenGLGPUVolumeRayCastMapper::RenderSubVolume(vtkRenderer *ren,
// so that we are in the same coordinate system
double camPos[4];
vtkCamera *cam = ren->GetActiveCamera();
- cam->GetPosition(camPos);
+ cam->GetEyePosition(camPos);
volume->GetMatrix( this->InvVolumeMatrix );
camPos[3] = 1.0;
this->InvVolumeMatrix->Invert();
@@ -6029,7 +6034,7 @@ void vtkOpenGLGPUVolumeRayCastMapper::LoadProjectionParameters(
// the coordinates are translated and rescaled
double cameraPosTexture[4];
- ren->GetActiveCamera()->GetPosition(cameraPosWorld);
+ ren->GetActiveCamera()->GetEyePosition(cameraPosWorld);
cameraPosWorld[3]=1.0; // we use homogeneous coordinates.
datasetToWorld->MultiplyPoint(cameraPosWorld,cameraPosDataset);
@@ -6240,7 +6245,6 @@ void vtkOpenGLGPUVolumeRayCastMapper::BuildProgram(vtkRenderWindow *w,
int shadeMethod,
int componentMethod)
{
-
assert("pre: valid_raycastMethod" &&
raycastMethod>= vtkOpenGLGPUVolumeRayCastMapperMethodMIP
&& raycastMethod<=vtkOpenGLGPUVolumeRayCastMapperMethodAdditive);