[vtkusers] How to apply the camera pose transformation computed using EPnP to the VTK camera?
technOslerphile
dennis.juve at gmail.com
Wed Aug 27 23:05:33 EDT 2014
For my augmented reality project, I have a 3D model viewed through a VTK camera
and the corresponding real object viewed through a real camera.
I used EPnP to estimate the extrinsic matrix of the real camera (this camera
has already been calibrated beforehand, so I know the intrinsic parameters)
by feeding the EPnP algorithm 3D points from VTK, their corresponding 2D
points in the real camera image, and the intrinsic parameters of the real
camera.
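For reference, the pose-estimation step looks roughly like this if done with
OpenCV's solvePnP and its SOLVEPNP_EPNP flag (only a sketch of the inputs and
outputs, not my exact code; the function and variable names are placeholders):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Sketch: estimate the real camera pose with EPnP and pack it into a 4x4 matrix.
cv::Mat estimateExtrinsic(const std::vector<cv::Point3f>& objectPoints, // 3D points from VTK
                          const std::vector<cv::Point2f>& imagePoints,  // matching 2D points in the real image
                          const cv::Mat& cameraMatrix,                  // 3x3 intrinsics from calibration
                          const cv::Mat& distCoeffs)                    // distortion coefficients
{
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                 rvec, tvec, false, cv::SOLVEPNP_EPNP);

    cv::Mat R;
    cv::Rodrigues(rvec, R); // 3x3 rotation (R1..R9)

    // Assemble [R | t; 0 0 0 1]
    cv::Mat extrinsic = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(extrinsic(cv::Rect(0, 0, 3, 3)));
    tvec.copyTo(extrinsic(cv::Rect(3, 0, 1, 3)));
    return extrinsic;
}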
After that, I obtained a rotation matrix with elements R1, R2, ..., R9 and a
translation vector with elements T1, T2, T3.
So my extrinsic matrix of the real camera looks like this (let's call this
extrinsicReal):
R1 R2 R3 T1
R4 R5 R6 T2
R7 R8 R9 T3
0  0  0  1
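In VTK form, that matrix can be filled in element by element; a sketch
(BuildExtrinsic is just an illustrative helper, with R in row-major order as
listed above):

#include <vtkMatrix4x4.h>
#include <vtkSmartPointer.h>

// Sketch: copy the EPnP result (R1..R9 row-major, T1..T3) into a vtkMatrix4x4.
vtkSmartPointer<vtkMatrix4x4> BuildExtrinsic(const double R[9], const double t[3])
{
    vtkSmartPointer<vtkMatrix4x4> extrinsicReal = vtkSmartPointer<vtkMatrix4x4>::New();
    extrinsicReal->Identity(); // bottom row stays 0 0 0 1
    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            extrinsicReal->SetElement(i, j, R[3 * i + j]);
        }
        extrinsicReal->SetElement(i, 3, t[i]);
    }
    return extrinsicReal;
}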
After this, I estimate the extrinsic matrix of my VTK camera using the
following code:
vtkSmartPointer<vtkMatrix4x4> extrinsicVTK = vtkSmartPointer<vtkMatrix4x4>::New();
extrinsicVTK->DeepCopy(renderer->GetActiveCamera()->GetViewTransformMatrix());
To fuse the 3D model from the VTK camera with the real camera video, the VTK
camera should be set to the same position as the real camera, and the focal
length of the VTK camera should be the same as that of the real camera.
Another important step is to apply the extrinsic matrix of the real camera to
the VTK camera. How do I do that?
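One approach I have seen suggested (but have not verified) is to set the VTK
camera parameters directly from R and t rather than composing view-transform
matrices; a rough sketch, assuming the usual computer-vision convention
(camera looking along +Z, image y pointing down), with fy and imageHeight
standing in for the calibrated vertical focal length and image height:

#include <vtkCamera.h>
#include <vtkMatrix4x4.h>
#include <vtkMath.h>
#include <cmath>

// Sketch: place a vtkCamera from the world-to-camera extrinsic [R | t] and the
// vertical focal length fy (in pixels) of an image imageHeight pixels tall.
void SetCameraFromExtrinsic(vtkCamera* camera, vtkMatrix4x4* extrinsicReal,
                            double fy, double imageHeight)
{
    double R[3][3], t[3];
    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            R[i][j] = extrinsicReal->GetElement(i, j);
        }
        t[i] = extrinsicReal->GetElement(i, 3);
    }

    // Camera centre in world coordinates: C = -R^T * t
    double C[3];
    for (int i = 0; i < 3; ++i)
    {
        C[i] = -(R[0][i] * t[0] + R[1][i] * t[1] + R[2][i] * t[2]);
    }

    // Third row of R is the viewing direction in world coordinates;
    // the negated second row is the view-up direction (image y points down).
    camera->SetPosition(C[0], C[1], C[2]);
    camera->SetFocalPoint(C[0] + R[2][0], C[1] + R[2][1], C[2] + R[2][2]);
    camera->SetViewUp(-R[1][0], -R[1][1], -R[1][2]);

    // Vertical view angle (degrees) from the intrinsic focal length.
    camera->SetViewAngle(2.0 * std::atan(imageHeight / (2.0 * fy)) * 180.0 / vtkMath::Pi());
}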
What I did was take the inverse of extrinsicReal and multiply it with
extrinsicVTK to get a new 4x4 matrix (let's call it newMatrix). I then applied
this matrix as the transformation of the VTK camera:
vtkSmartPointer<vtkMatrix4x4> newMatrix = vtkSmartPointer<vtkMatrix4x4>::New();
vtkMatrix4x4::Multiply4x4(extrinsicRealInvert, extrinsicVTK, newMatrix);
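Here extrinsicRealInvert is the inverse of extrinsicReal; one way to compute
it with VTK's static helper (a sketch, not necessarily my exact code):

vtkSmartPointer<vtkMatrix4x4> extrinsicRealInvert = vtkSmartPointer<vtkMatrix4x4>::New();
vtkMatrix4x4::Invert(extrinsicReal, extrinsicRealInvert);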
vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
transform->SetMatrix(newMatrix);
transform->Update();
renderer->GetActiveCamera()->ApplyTransform(transform);
I am not really sure whether this is the correct method, but I checked the
real camera position (which I got from EPnP) against the VTK camera position
(after applying the transform above) and they are exactly the same. The
orientation of the real camera and the direction of projection of the VTK
camera also match.
The problem is that even though these parameters match for both the VTK and
the real camera, the 3D VTK model is not perfectly aligned with the real
camera video. Can someone guide me step by step through debugging this issue?