<div dir="ltr">Hi Matt! <div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span style="font-size:13px">If you are working with a current version of ITK, when you get the<br></span><span style="font-size:13px">inverse transform, it should transfer the Center for you. The Center<br></span><span style="font-size:13px">point locations are the FixedParameters for the parent class,<br></span><span style="font-size:13px">MatrixOffsetTransformBase. The relationship with parent classes can be<br></span><span style="font-size:13px">found by examining the Doxygen page for the class [1].</span></blockquote><div><br></div><div>I took a few days to read the RIRE documentation and I came to the conclusion that the direction of registration is not really relevant...taking up CT as the fexed image and MR as the moving image can be used the transformation that provides ITK...without having to calculate the inverse.<br></div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span style="font-size:13px">Do you get the same result by applying the TransformPoint() method of<br></span><span style="font-size:13px">the inverse transform? This is the API call that should be applied to<br></span><span style="font-size:13px">transform a point.</span></blockquote><div><br></div><div>Yes Matt, I get the same results using the API.</div><div><br></div><div>But...I made some changes in my optimizers and I got very good results, like this ( with RegularStepGradientDescentOptimizerv4 and MattesMutualInformationImageToImageMetricv4):</div><div><br></div><div><div>iterations = 200<br></div><div><div>Metric value = -0.709388</div></div><div><div>versor X = 0.0155583</div><div> versor Y = 0.00642035</div><div> versor Z = -0.0487144</div><div> Translation X = 7.82977</div><div> Translation Y = -60.1034</div><div> Translation Z = -23.6258</div></div><div>+-----------------------------------------------------------------------------------------------+</div><div>| X GT| Y GT| Z GT| X R| Y R| Z R|</div><div>-----------------------------------------------------------------------------------------------+</div><div>| -7.573100| -41.253400| -27.309300| -7.661395| -40.915138| -26.044441|</div><div>| 324.872200| -72.815900| -32.906300| 324.712907| -73.345104| -30.833635|</div><div>| 24.160700| 291.039300| -16.272700| 24.902019| 291.325008| -15.874605|</div><div>| 356.606000| 259.476800| -21.869700| 357.276322| 258.895042| -20.663798|</div><div>| -6.055400| -45.115700| 84.613700| -6.394922| -44.465633| 85.892103|</div><div>| 326.389900| -76.678200| 79.016800| 325.979381| -76.895599| 81.102910|</div><div>| 25.678400| 287.176900| 95.650300| 26.168493| 287.774513| 96.061940|</div><div>| 358.123700| 255.614500| 90.053400| 358.542796| 255.344547| 91.272747|</div><div>+-----------------------------------------------------------------------------------------------+</div></div><div>[X, Y, Z]GT are the "<span style="color:rgb(80,0,80);font-size:13px">ground truth</span>" values and [X, Y, Z]R are my results</div><div> </div><div>Now, something I find strange is that when increasing the number of iterations...metric value limprovement is too little but the result is little worse..., example:</div><div><br></div><div><div>Iterations = 334</div><div>Metric value = -0.710918</div></div><div><div> versor X = 0.0216566</div><div> versor Y = 0.00700629</div><div> versor Z = -0.0508766</div><div> Translation X = 7.80722</div><div> Translation Y = -60.5124</div><div> Translation Z = -24.1047</div></div><div><br></div><div><div>+-----------------------------------------------------------------------------------------------+</div><div>| X GT| Y GT| Z GT| X R| Y R| Z R|</div><div>+-----------------------------------------------------------------------------------------------+</div><div>| -7.573100| -41.253400| -27.309300| -8.342271| -39.764911| -28.121895|</div><div>| 324.872200| -72.815900| -32.906300| 323.882938| -73.594962| -33.530625|</div><div>| 24.160700| 291.039300| -16.272700| 25.690487| 292.179801| -13.916412|</div><div>| 356.606000| 259.476800| -21.869700| 357.915696| 258.349750| -19.325143|</div><div>| -6.055400| -45.115700| 84.613700| -7.022108| -44.688304| 83.762051|</div><div>| 326.389900| -76.678200| 79.016800| 325.203101| -78.518355| 78.353321|</div><div>| 25.678400| 287.176900| 95.650300| 27.010650| 287.256408| 97.967534|</div><div>| 358.123700| 255.614500| 90.053400| 359.235859| 253.426357| 92.558803|</div><div>+-----------------------------------------------------------------------------------------------+</div></div><div><br></div><div>This pattern is repeated with other optimizers ( like OnePlusOne and a GA approach that I am implementing ), that you think about it?<br></div><div><br></div><div>Other questions Matt...:<br></div><div><br></div><div>How works the multithreaded in metrics ? is customizable? improve performance? specifically in the case of Mattes Mutual Information...<br></div><div><br></div><div>I tried using the helper CenteredVersorTransformInitializer... but the transformation that generates makes, incredibly and also very strange,the optimizers does not advance...using CenteredTransformInitializer this does not happen...<br></div><div><br></div><div>Really Thanks in advance Matt! <br></div><div>Regards,</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-04-07 15:03 GMT-04:00 Matt McCormick <span dir="ltr"><<a href="mailto:matt.mccormick@kitware.com" target="_blank">matt.mccormick@kitware.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Gabriel!<br>
<span class=""><br>
> I am use RIRE project, specifically CT (movig) and MR_PD (fixed) images.<br>
> Basically, I hava a set of point (in millimeters) of the CT image to which<br>
> apply the trasform result of the registration and updaload this results in<br>
> the web for the evaluation. Example of set of points and his "ground truth"<br>
> :<br>
><br>
> Point x y z new_x new_y new_z<br>
><br>
> 1 0.0000 0.0000 0.0000 -7.5731 -41.2534 -27.3093<br>
> 2 333.9870 0.0000 0.0000 324.8722 -72.8159 -32.9063<br>
> 3 0.0000 333.9870 0.0000 24.1607 291.0393 -16.2727<br>
> 4 333.9870 333.9870 0.0000 356.6060 259.4768 -21.8697<br>
> 5 0.0000 0.0000 112.0000 -6.0554 -45.1157 84.6137<br>
> 6 333.9870 0.0000 112.0000 326.3899 -76.6782 79.0168<br>
> 7 0.0000 333.9870 112.0000 25.6784 287.1769 95.6503<br>
> 8 333.9870 333.9870 112.0000 358.1237 255.6145 90.0534<br>
<br>
</span>Trying to reproduce previous results is a good path forward.<br>
<span class=""><br>
> So, the first I need is the transformation to apply, for that I do the<br>
> following :<br>
><br>
> //get the inverse transform<br>
> TransformType::Pointer inverseTransform = TransformType::New();<br>
> inverseTransform->SetCenter( finalTransform->GetCenter() );<br>
> bool response = finalTransform->GetInverse(inverseTransform);<br>
><br>
><br>
> It makes sense to use the same center in the inverse transform?. A<br>
> "quaternion" define an "axis" (right part) of rotation and an angle to use<br>
> for rotate the image about this axis...why use a center of rotation...?<br>
<br>
</span>If you are working with a current version of ITK, when you get the<br>
inverse transform, it should transfer the Center for you. The Center<br>
point locations are the FixedParameters for the parent class,<br>
MatrixOffsetTransformBase. The relationship with parent classes can be<br>
found by examining the Doxygen page for the class [1].<br>
<span class=""><br>
<br>
> Second, apply this transform...as follows:<br>
><br>
> NewPoint = RotationMatrix * OriginalPoint + Offset<br>
><br>
> The rotation matrix and the offset are obtained from the inverse transforme<br>
> objetc. Found something wrong? something that is not taking into account ?<br>
> The results do not appear to be correct...the calculated error is too big<br>
> and does not correspond with the visual result.<br>
<br>
</span>Do you get the same result by applying the TransformPoint() method of<br>
the inverse transform? This is the API call that should be applied to<br>
transform a point.<br>
<br>
Thanks,<br>
Matt<br>
<br>
<br>
[1] <a href="http://www.itk.org/Doxygen/html/classitk_1_1VersorRigid3DTransform.html" target="_blank">http://www.itk.org/Doxygen/html/classitk_1_1VersorRigid3DTransform.html</a><br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><font color="#666666" size="1"><i><b>Gabriel Alberto Giménez.</b></i></font></div></div>
</div>