From ghostcz at hotmail.com Tue Dec 2 16:21:47 2014 From: ghostcz at hotmail.com (louie L) Date: Tue, 2 Dec 2014 22:21:47 +0100 Subject: [Rtk-users] Input and output image buffer Message-ID: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory. However, the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie From simon.rit at creatis.insa-lyon.fr Wed Dec 3 03:31:28 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 09:31:28 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed point to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > Dear RTK users and developers, > > I am writing a backprojection filter whose superclass is > ImageToImageFilter. After allocating the output, I called > this->GetInput()->GetBufferPointer() and > this->GetOutput()->GetBufferPointer(). > to get the address of the images in memory. However the two functions > above return the same value. Why? If this is not the correct way to get the > address of the input image, how can I get that address? > Thank you. 
> > Best regards, > Louie > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task. The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here at p18 of the pdf. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: - a world coordinate system to source centered system matrix (modeling Euler 3D rotation and also translation from isocenter to source) - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) As I understand it, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: the extrinsic parameters matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotations follows the RTK reference. 
And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance, and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the sid distance as well. The intrinsic parameters matrix can be decomposed in order to find the focal length (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction. What I do not understand is: - In the RTK documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in RTK contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. - Why reconstructions aren't working at all. I enclosed a sample geometry file I have generated that provides some acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with SART (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged into the master branch of the public repository our developments for RTK wrappings in Python and other languages. 
The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK . If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any question and to incorporate suggested changes. Enjoy and thanks in advance for your help! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is input(0). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and RAM consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed point to different memory spaces. 
Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices? RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig 1.1 shows an origin of your projections coordinate system at the center of the projections, have you Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, AddProjection works in degrees; you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. 
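[Editor's note: the degrees/radians pitfall mentioned above is easy to hit when a calibration pipeline produces angles in radians. A minimal Python sketch of the explicit conversion; the helper name is mine, and the RTK geometry call itself is not shown here:]

```python
import math

def gantry_angle_degrees(angle_rad):
    """Convert an angle produced in radians (e.g. by a calibration
    pipeline) to the degrees expected by AddProjection.
    AddProjectionInRadians makes this conversion unnecessary."""
    return math.degrees(angle_rad)

print(gantry_angle_degrees(math.pi / 2))  # 90.0
```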
Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. 
> So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki, we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. 
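[Editor's note: the in-place behaviour described above can be mimicked with a toy, pure-Python filter. This is not RTK code and the class and method names are invented; it only illustrates why GetBufferPointer() returns the same address for input #0 and the output until in-place execution is switched off:]

```python
class ToyInPlaceFilter:
    """Mimics ITK's InPlaceImageFilter: by default the output reuses
    the buffer of input #0, so input and output 'buffer pointers'
    are the same object."""

    def __init__(self):
        self._inputs = []
        self._in_place = True  # InPlaceImageFilter defaults to in-place on

    def set_input(self, index, buf):
        while len(self._inputs) <= index:
            self._inputs.append(None)
        self._inputs[index] = buf

    def in_place_off(self):
        # Equivalent of calling InPlaceOff() before Update()
        self._in_place = False

    def update(self):
        primary = self._inputs[0]
        # In-place: overwrite input #0; otherwise allocate a new buffer
        out = primary if self._in_place else list(primary)
        for i, v in enumerate(out):
            out[i] = v + 1  # stand-in for the actual back projection
        return out

volume = [0.0, 0.0, 0.0]
f = ToyInPlaceFilter()
f.set_input(0, volume)
out = f.update()
print(out is volume)   # True: output shares the buffer of input #0

g = ToyInPlaceFilter()
g.set_input(0, volume)
g.in_place_off()
out2 = g.update()
print(out2 is volume)  # False: separate memory, like after InPlaceOff()
```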
Forward projection updates projection images from a volume so the first input is the same as the output, the projections. I do not see how you could modify this, could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but if you want to play with buffer pointers, I would try to avoid this if I were you, because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions. Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. 
After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hoi Thibault, The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. 
The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of the source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description. Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. 
> I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All, I have a question regarding the forward projectors. It seems that at the boundary, integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations and o is the origin (n is the dimension of your space). 
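[Editor's note: the mapping above can be checked numerically with a small pure-Python helper. This is only a sketch of what ITK's TransformIndexToPhysicalPoint computes, not ITK code:]

```python
def index_to_physical(index, spacing, origin, direction):
    """x = d*s*v + o: d is the direction matrix, s the diagonal
    spacing matrix, v the voxel index and o the origin."""
    n = len(index)
    # s*v: scale each index component by its spacing
    scaled = [spacing[i] * index[i] for i in range(n)]
    # d*(s*v) + o: rotate by the direction matrix, then translate
    return [sum(direction[r][c] * scaled[c] for c in range(n)) + origin[r]
            for r in range(n)]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# With the default identity direction the mapping is just origin + spacing*index
p = index_to_physical([2, 3, 0], [0.5, 0.5, 1.0], [-10.0, -10.0, 0.0], identity)
print(p)  # [-9.0, -8.5, 0.0]
```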
I don't know if/where it is documented, but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct, but it's something you should probably know about. I'm a bit lost in your geometric descriptions but it should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allows changing from a world coordinate system, whose origin > corresponds to the isocenter in RTK, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, which corresponds to the orthogonal > projection of the focal point. > > I am aware of that fact, and this is why I took care to perform the following > in RTK code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that corresponds to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector; is there a document where I can > find what value I have to set it to, according to my acquisition geometry? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer. If I understand this sentence well: *"For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size"."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half the detector size if all the out-of-plane rotations are negligible. But... When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that the projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with regard to the world coordinate system. This information could help me to determine whether my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd but why not? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in RTK. 
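[Editor's note: assuming the convention suggested by this rtksimulatedgeometry output — an image origin chosen so that the detector is centered on the principal ray, which is what makes the projection offsets zero — the relation between origin, spacing and detector size can be sketched in Python (the helper names are mine, not RTK's):]

```python
def centered_origin(n_pixels, spacing):
    """Origin (physical coordinate of the first pixel's center) that
    puts the middle of an n_pixels-wide detector row at coordinate 0."""
    return -(n_pixels - 1) * spacing / 2.0

def pixel_coordinate(i, n_pixels, spacing):
    # origin + i*spacing, with the identity direction matrix
    return centered_origin(n_pixels, spacing) + i * spacing

# 512 pixels of 0.5 mm: first pixel center at -127.75 mm ...
print(centered_origin(512, 0.5))  # -127.75
# ... and the detector midpoint falls on 0, i.e. a zero projection offset
mid = (pixel_coordinate(255, 512, 0.5) + pixel_coordinate(256, 512, 0.5)) / 2.0
print(mid)  # 0.0
```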
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require performing the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives a common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an 'ideal' source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the 'ideal' source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>> >> Thank you in advance for you help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection so in this case, the source, center-of-rotation and projection (0,0) points are aligned and offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix but it's up to you. For example, the directions of the vector of the projection axes are the lines of your projection matrix if I'm not mistaking. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, would you be able to provide a dataset to get some help, that would be much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many ?normal? 
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
lastly, the size of projection >> offset is the distance between the origin of the projection image and the >> origin of the detector (point D). For many ?normal? 2D image format the >> origin of the image is just at the first pixel (one corner), so the size of >> the projection offset is just the distance from the corner to D and has >> nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK >> (causing shifts of source and detector origin, and changes of sid and sdd, >> etc. compared with the point of view of the Fig 1 style), although this >> angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?." > The projection offset correspond exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. > But... > When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appear that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both image origin and detector origin are at the center so the projection offset is zero. But as I said, in many normal 2D image format like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is then the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > An other aspect that puzzled my, is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axis (u and v) orientated with regard to the detector axis (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image format and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned. 
> > About the images geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and whithout surprise I experienced the following behaviour: > > Origin point: > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > Spacing > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : >> >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about z axis, the out-of-plane rotation defined in RTK is about x axis, and there should be another out-of-plane rotation about y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives an common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about y can be in fact merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than that in Fig 1 to represent this detector change, as shown in Fig 2: an ?ideal? 
source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the ?ideal? source B. The perpendicular ray AC from the real source A intersects the detector at C differing from D by CD or AB, the source offset, which is the reason that you see the source offset appears again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has x element, otherwise it contains both x and y elements. lastly, the size of projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >>> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the calibration process give a 3*4 projection matrix for each acquisition position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. 
>>> >>> Using the pinhole model, I tried to factorize this matrix as the product of various submatrix, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here at p18 of the pdf. >>> I think that the way I factorized the matrix is correct, and match the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . >>> >>> My problem arise when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrix, each modelling a system coordinate change, I have: >>> - a world coordinate system to source centered system matrix (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows RTK reference. And the translation in z should be replaced by zero, as it correspond to source-isocenter distance, and is taken into accounts in the magnification step. >>> So I think it is easy to find all the rotation angle, and the sid distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the focal (or source detector distance) and the projection offset, from the U0, V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. 
>>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide some acceptable result when used for phantom projection, but provide totally wrong reconstruction when reconstructing my image data with sart (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between siddon and joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modify the code). We have done some similar work in Gate using RTK. This is not public yet but the idea is to implement a specific functor for Joseph. You should look at the code and the two TInterpolationWeightMultiplication and TProjectedValueAccumulation templates in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I check against my quick ray-tracer implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but can't > resolve the issue completely. > > I put an example below, for some reason I get signal at the outer > detectors where there should be none. 
> > Also: Can I somehow keep track of the voxel traversed in your code > (for dosimetric and simulation applications). > > > > > > Example: > > > double sid = 100, aid = 20; > int nproj = 1; > double first_angle = 0, angular_arc = 360; > > volume_spacing(1, 1, 1); > volume_center(0.0, 0.0, 0.0); > volume_size(3, 3, 3); > > projection_center(0.0, 0.0, 0.0); > projection_size(5, 5, nproj); > projection_spacing(1, 1, 1.0); > > > The projections are: > > (1) Joseph projector > > z: 0 > 0: 1: 2: 3: 4: > 0: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > 1: 1.000174 3.000208 3.000104 3.000208 1.000174 > 2: 1.000139 3.000104 3 3.000104 1.000139 > 3: 1.000174 3.000208 3.000104 3.000208 1.000174 > 4: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > > > (2) My Raytracer: > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > (3) RayBox Integration (fom -1.5 to 1.5) > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > Value except at the boundary coincide, only at the detector boundary > there is signal that I dont understand > > Rgds > Steffen > > > > 2014-12-05 9:46 GMT+01:00, Steffen Lukas : >> Hi Simon >> >> I check against my quick ray-tracer-implementation in Siddon style. >> >> I tried the enlarged volume with 0-boundary already before, but cant >> resolve the issue completely. >> >> I put an example below, for some reason I get signal at the outer >> detetectors where there should be none. >> >> Also: Can I somehow keep track of the voxel traversed in your code >> (for dosimetric and simulation applications). 
>> >> Arne >> >> >> >> Example: >> >> >> double sid = 100, aid = 20; >> int nproj = 1; >> double first_angle = 0, angular_arc = 360; >> >> volume_spacing(1, 1, 1); >> volume_center(0.0, 0.0, 0.0); >> volume_size(3, 3, 3); >> >> projection_center(0.0, 0.0, 0.0); >> int3 projection_size(5, 5, nproj); >> vect3 projection_spacing(1, 1, 1.0); >> matr3 projection_direction = matr3::Identity(); >> >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. >>> That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary integration starts at mid-voxel which makes it difficult >>>> for me to compare with our own implemention since information is >>>> partly lost. >>>> >>>> Can I somehow setup the projectors such that all (full) voxel are >>>> integrated? >>>> >>>> Thanks a lost >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags. (i.e. I can't get any verbose output with '-v'). 
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call, has resolved the issue for me. I'm not sure, however, what the intended use of the second call was for (it occurs immediately after: args_params.check_required = 1; which I feel could just be moved above the first call, as it happens regardless, but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest github commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. 
> I'm not sure, however, what the intended use of the second call was > for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed. - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now but let me know if you notice something wrong again. 
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm if there is filtering in this code and what type of filters there are (ramp, Hann etc)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not? // Create reconstructed image typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType; ConstantImageSourceType::PointType origin; ConstantImageSourceType::SpacingType spacing; ConstantImageSourceType::SizeType sizeOutput; ConstantImageSourceType::DirectionType direction; direction.SetIdentity(); sizeOutput[0] = 1890; //1747; //1890; as found in dicom info sizeOutput[1] = 2457; //as found in dicom info sizeOutput[2] = 1; //as found in dicom info double offset(26.27); // Gap between detector and sample origin[0] = 171.99; origin[1] = -223/2; //223 is the height of the reconstructed volume origin[2] = offset+0; spacing[0] = 0.091; spacing[1] = 0.091; spacing[2] = 1; direction [0][0] = -1; direction [0][1] = 0; direction [0][2] = 0; direction [1][0] = 0; direction [1][1] = 1; direction [1][2] = 0; direction [2][0] = 0; direction [2][1] = 0; direction [2][2] = 1; ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New(); constantImageSource->SetOrigin( origin ); constantImageSource->SetSpacing( spacing ); constantImageSource->SetSize( sizeOutput ); constantImageSource->SetConstant( 0. 
); constantImageSource->SetDirection(direction); const ImageType::DirectionType& direct = constantImageSource->GetDirection(); std::cout <<"Direction3DZeroMatrix= " << std::endl; std::cout << direct << std::endl; std::cout << "Performing reconstruction" << std::endl; //BackProjection reconstruction (no filtering) typedef rtk::ProjectionGeometry<3> ProjectionGeometry; ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> FDKCPUType; FDKCPUType::Pointer feldkamp = FDKCPUType::New(); feldkamp->SetInput( 0, constantImageSource->GetOutput() ); feldkamp->SetInput( 1, imageStack); feldkamp->SetGeometry( baseGeom ); feldkamp->Update(); -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, and redefines some methods (I think it performs a specific weighting of projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK. 
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA or OpenCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not? 
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result). 
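The half-height shift Padraig observes can be checked against ITK's index-to-physical-point convention, where voxel index i maps to the physical point origin + D * diag(spacing) * i, with the origin at the center of voxel (0, 0, 0). Below is a stand-alone sketch (plain Python, with the numbers from the code above; note, as an aside, that `-223/2` in C++ is integer division and gives -111, not -111.5):

```python
def index_to_physical(index, origin, spacing, direction):
    """ITK convention: x_phys = origin + D * diag(spacing) * index."""
    return tuple(
        origin[r] + sum(direction[r][c] * spacing[c] * index[c] for c in range(3))
        for r in range(3)
    )

origin = (171.99, -111.0, 26.27)   # in C++, -223/2 truncates to -111
spacing = (0.091, 0.091, 1.0)
direction = ((-1, 0, 0), (0, 1, 0), (0, 0, 1))

# The center of the first voxel is the origin itself:
print(index_to_physical((0, 0, 0), origin, spacing, direction))
# → (171.99, -111.0, 26.27)

# direction[0][0] = -1 makes the volume extend towards -x,
# so the last voxel of the row sits near x ≈ 0.09:
print(index_to_physical((1889, 0, 0), origin, spacing, direction))
```

So the y extent of this volume runs from -111 upwards, not symmetrically around 0, which is consistent with the "shifted by about half the height" observation.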
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as ?implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory?. > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection recontruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue La?nnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override issue makes sense, and I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset (to null, or default values, and thus thought 'override' meant something else!). Thanks again, glad the info was helpful. 
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occured in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. 
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override issue makes sense, and I didn't have time to trace > through gengetopt. I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values, and thus > thought 'override' meant something else!). > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed. 
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call, has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data. 
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets. I hope this helps. Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations. 
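Cyril's description of alpha and beta can be tied to the usual ADMM formulation of TV-regularized reconstruction (a sketch in standard notation, which may differ in details from RTK's implementation; A is the forward projector, p the measured projections):

```latex
% alpha weighs the regularizer, so it changes the reconstructed image:
\min_x \; \tfrac{1}{2}\|Ax - p\|_2^2 + \alpha\,\|\nabla x\|_1
% ADMM introduces a split z = \nabla x and a dual variable \lambda,
% and minimizes the augmented Lagrangian:
L_\beta(x, z, \lambda) = \tfrac{1}{2}\|Ax - p\|_2^2 + \alpha\|z\|_1
  + \lambda^{T}(\nabla x - z) + \tfrac{\beta}{2}\,\|\nabla x - z\|_2^2
```

beta appears only in the coupling terms; at convergence z = nabla x, so beta affects the path the iterations take (and hence the convergence speed), not the limit point. That is why the advice above is to keep beta fixed and tune alpha.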
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became. 
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. 
Using > the CUDA forward and back projectors helped a lot maintain the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> > What parameters and how can I adjust (like alpha, beta, or iterations?) to > improve the ADMMTV reconstruction? There is not much description of this > application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue La?nnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to a too high gamma. Depending on you data, gamma can have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look ? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations. 
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noisier compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from the full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma to 0.2, 0.5, 1.0, or 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma didn't make a difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, admmwavelets seems to perform better than admmtotalvariation > but neither gave satisfactory results. Not sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English.
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry!). > Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again!). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play with alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot to keep > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization).
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed...
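Cyril's point about gamma can be made concrete with a tiny 1-D sketch of TV denoising: minimize 0.5*||u - f||^2 + gamma*TV(u) by gradient descent on a smoothed TV term. This is an illustration of the principle only (plain Python, not the algorithm inside rtktotalvariationdenoising; the smoothing constant eps and the step size are arbitrary choices made here for stability):

```python
import math

def smoothed_tv(u, eps=1e-2):
    # smoothed total variation of a 1-D signal: sum of sqrt(diff^2 + eps)
    return sum(math.sqrt((u[i + 1] - u[i]) ** 2 + eps) for i in range(len(u) - 1))

def tv_denoise_1d(f, gamma, n_iter=800, step=0.02, eps=1e-2):
    """Gradient descent on 0.5 * ||u - f||^2 + gamma * TV_eps(u)."""
    u = list(f)
    for _ in range(n_iter):
        g = [u[i] - f[i] for i in range(len(u))]  # fidelity gradient
        for i in range(len(u) - 1):
            d = (u[i + 1] - u[i]) / math.sqrt((u[i + 1] - u[i]) ** 2 + eps)
            g[i] -= gamma * d       # TV gradient contribution, left node
            g[i + 1] += gamma * d   # TV gradient contribution, right node
        u = [u[i] - step * g[i] for i in range(len(u))]
    return u

# noisy step edge: alternating +-0.2 "noise" on two constant blocks
noisy = [0.0 + 0.2 * (-1) ** i for i in range(10)] + \
        [1.0 + 0.2 * (-1) ** i for i in range(10)]
```

Raising gamma lowers the total variation of the result: first the oscillating noise is flattened, then, as Cyril warns, the structures themselves start to lose contrast.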
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on the full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to too high a gamma. Depending > on your data, gamma may have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noisier compared to that using 3 iterations. The contrast is getting worse > as well.
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from the full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma didn't make a > difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, admmwavelets seems to perform better than admmtotalvariation but > neither gave satisfactory results. Not sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry!).
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again!). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play with alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot to keep the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample.
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data.
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command: rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com: > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on the full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to too high a gamma. > Depending on your data, gamma may have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry.
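The intensity-to-attenuation conversion Cyril refers to follows the Beer-Lambert law: a projection pixel should store ln(I0/I), so unattenuated air (I = I0) maps to exactly zero. A minimal sketch (the function name is ours for illustration, not an RTK API):

```python
import math

def intensity_to_attenuation(intensity, i0):
    """Beer-Lambert law: I = I0 * exp(-integral of mu along the ray),
    so the projection (line integral of attenuation) is ln(I0 / I).
    i0 is the flood-field (air) intensity; getting i0 wrong shifts every
    projection by a constant, which is exactly the 'higher-than-zero
    attenuation in air' problem described above."""
    return math.log(i0 / intensity)
```

With a correct i0, pixels seeing only air give a value of 0, and halving the measured intensity adds ln(2) to the line integral.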
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noisier compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from the full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma to 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma >> didn't make a difference. >> I hope I didn't misuse the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, admmwavelets seems to perform better than >> admmtotalvariation but neither gave satisfactory results. Not sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run.
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry!). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again!). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play with alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot to keep the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results.
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page.
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example: 1) the reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:

p     attenuation
0     0, or log(257)-log(257)
1     log(257)-log(2)
2     log(257)-log(3)
3     log(257)-log(4)
...
254   log(257)-log(255)
255   log(257)-log(256)

My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question.
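The 8-bit table Chao describes can be tabulated in a few lines; a plain-Python rendering of the stated rule (an illustration of the rule only, not the actual code of rtkTiffLookupTableImageFilter.h):

```python
import math

def tiff_lut_8bit():
    """Attenuation lookup table as described above: the reference is
    log(257); p = 0 maps to no attenuation; for 1 <= p <= 255 the
    attenuation is log(257) - log(p + 1)."""
    ref = math.log(257)
    lut = [0.0]  # p = 0: treated as air (the special case in question)
    for p in range(1, 256):
        lut.append(ref - math.log(p + 1))
    return lut
```

Note that lut[0] = 0 breaks the monotonic decrease that holds for p >= 1, which is exactly the special case Chao is asking about.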
I can't remember exactly, but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border, which is probably why I did this, since the border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From ghostcz at hotmail.com Tue Dec 2 16:21:47 2014 From: ghostcz at hotmail.com (louie L) Date: Tue, 2 Dec 2014 22:21:47 +0100 Subject: [Rtk-users] Input and output image buffer Message-ID: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory.
However, the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie From simon.rit at creatis.insa-lyon.fr Wed Dec 3 03:31:28 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 09:31:28 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed point to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > Dear RTK users and developers, > > I am writing a backprojection filter whose superclass is > ImageToImageFilter. After allocating the output, I called > this->GetInput()->GetBufferPointer() and > this->GetOutput()->GetBufferPointer() > to get the address of the images in memory. However the two functions > above return the same value. Why? If this is not the correct way to get the > address of the input image, how can I get that address? > Thank you. > > Best regards, > Louie > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task.
The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here at p18 of the pdf. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here. My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: - a world coordinate system to source centered system matrix (modeling Euler 3D rotation and also translation from isocenter to source) - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: the extrinsic parameters matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotations follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance, and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the sid distance as well. The intrinsic parameters matrix can be decomposed in order to find the focal (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction.
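A factorization like the one Thibault describes can be sanity-checked numerically by composing the factors back into a 3*4 matrix and projecting known points. A toy sketch with a single gantry rotation (the parameter names and the single-rotation model here are simplifications for illustration, not the actual RTK GantryAngle/InPlanAngle/OutOfPlanAngle convention):

```python
import math

def projection_matrix(f, u0, v0, gantry_deg, sid):
    """Compose a 3x4 pinhole projection matrix P = K [R | t].

    f: source-to-detector distance in pixel units; (u0, v0): principal
    point; gantry_deg: rotation about the y axis; sid: source-to-
    isocenter distance (the z translation of the extrinsic part)."""
    a = math.radians(gantry_deg)
    # extrinsic part: rotation about y (gantry), then translation along z
    R = [[math.cos(a), 0.0, math.sin(a)],
         [0.0, 1.0, 0.0],
         [-math.sin(a), 0.0, math.cos(a)]]
    t = [0.0, 0.0, sid]
    # intrinsic part: focal scaling and principal point offset
    K = [[f, 0.0, u0],
         [0.0, f, v0],
         [0.0, 0.0, 1.0]]
    # P = K [R | t]
    return [[sum(K[i][k] * R[k][j] for k in range(3)) for j in range(3)]
            + [sum(K[i][k] * t[k] for k in range(3))] for i in range(3)]

def project(P, x, y, z):
    # homogeneous projection of a 3-D world point to 2-D pixel coordinates
    h = [P[i][0] * x + P[i][1] * y + P[i][2] * z + P[i][3] for i in range(3)]
    return h[0] / h[2], h[1] / h[2]
```

A useful invariant for debugging: the isocenter must always project to the principal point (u0, v0), whatever the gantry angle, because it lies on the source-detector axis in this model.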
What I do not understand is: - In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. - Why reconstructions aren't working at all. I have enclosed a sample geometry file I generated; it provides some acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with sart (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged into the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK. If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any question and to incorporate suggested changes. Enjoy and thanks in advance for your help!
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input(0). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and RAM consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed point to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you.
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices? RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig 1.1 shows an origin of your projections' coordinate system at the center of the projections; have you set it that way in your projection images? Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, the AddProjection method of the geometry works in degrees; you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues with this task. > > The system projection matrix is transparently calibrated, and the > calibration process gives a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix.
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. 
> -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki; we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume, so the first input is the same as the output, the projections. I do not see how you could modify this; could you give an example of why you would want to do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but even though you can play with buffer pointers, I would avoid doing so if I were you, because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions. 
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However, RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element; otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of the source and detector origin, changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description. Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the > pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . 
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All, I have a question regarding the forward projectors. It seems that at the boundary, integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot, Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal n×n matrix with the spacing on the diagonal, d is the n×n direction matrix to allow rotations, and o is the origin (n is the dimension of your space). 
I don't know if/where it is documented, but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct, but it's something you should know about. I'm a bit lost in your geometric descriptions, but it should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allows changing from a world coordinate system, whose origin > corresponds to the isocenter in RTK, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, which corresponds to the orthogonal > projection of the focal point. > > I am aware of that fact; this is why I took care to perform the following > in RTK code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that corresponds to: > ( -half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector; is there a document where I can > find what value I have to set it to, according to my acquisition geometry? > > Thank you for your help, > > Kind regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer. If I understand this sentence correctly: *"For many 'normal' 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like 'detector half size'."* then the projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding they should be close to half the detector size if all the out-of-plane rotations are negligible. But... When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that the projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with respect to the world coordinate system. This information could help me determine whether my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: *Origin point:* ( -half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) (the Z coordinate is a bit odd, but why not?) *Spacing:* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) *Direction:* a classic 3*3 identity matrix These are exactly the kind of values I use when importing my images into RTK. 
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives an common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an ?ideal? source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the ?ideal? source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>> Thank you in advance for you help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and the offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct, but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix, but it's up to you. For example, the direction vectors of the projection axes are the lines of your projection matrix, if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, if you were able to provide a dataset, it would be much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many ?normal? 
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
Lastly, the size of the projection >> offset is the distance between the origin of the projection image and the >> origin of the detector (point D). For many "normal" 2D image formats the >> origin of the image is just at the first pixel (one corner), so the size of >> the projection offset is just the distance from the corner to D and has >> nothing to do with things like "detector half size". >> >> In fact the out-of-plane rotation about x has a similar effect in RTK >> (causing shifts of the source and detector origin, and changes of sid and sdd, >> etc. compared with the point of view of the Fig 1 style), although this >> angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description. >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues with this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process gives a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrices, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used can be found here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and matches the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> .
>>> >>> My problem arises when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrices, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source-centered system matrix >>> (modelling a 3D Euler rotation and also the translation from isocenter to source) >>> - a source-centered system to 2D buffer index matrix, modelling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> The extrinsic parameters matrix corresponds to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows the RTK >>> reference. And the translation in z should be replaced by zero, as it >>> corresponds to the source-isocenter distance, and is taken into account in the >>> magnification step. >>> So I think it is easy to find all the rotation angles, and the sid >>> distance as well. >>> >>> The intrinsic parameters matrix can be decomposed in order to find the >>> focal (or source-detector distance) and the projection offset, from the U0, >>> V0 parameters, subtracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the RTK documentation, it is stated that "The detector position is >>> defined with respect to the source", but the ProjectionTranslationM in RTK >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstructions aren't working at all >>> >>> I enclosed a sample geometry file I generated that provides >>> some acceptable results when used for phantom projection, but totally >>> wrong reconstructions when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume).
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?." > The projection offset correspond exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. > But... > When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appear that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both the image origin and the detector origin are at the center, so the projection offset is zero. But as I said, in many normal 2D image formats like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half the detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with regard to the world coordinate system. > This information could help me determine whether my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axes (u and v) are orientated with regard to the detector axes (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image formats, and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned.
> > About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: > > Origin point: > ( -half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) > the coordinate in Z is a bit odd, but why not? > Spacing > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images into RTK. > > Thank you for your time and help > > Simon: finding the position of the origin of the detector, and directions, etc... would require performing the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : >> >> Hoi Thibault, >> >> The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. >> >> However, RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently from those in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". >> >> In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of the source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description. >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >>> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task. >>> >>> The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix.
>>> >>> Using the pinhole model, I tried to factorize this matrix as the product of various submatrix, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here at p18 of the pdf. >>> I think that the way I factorized the matrix is correct, and match the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . >>> >>> My problem arise when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrix, each modelling a system coordinate change, I have: >>> - a world coordinate system to source centered system matrix (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows RTK reference. And the translation in z should be replaced by zero, as it correspond to source-isocenter distance, and is taken into accounts in the magnification step. >>> So I think it is easy to find all the rotation angle, and the sid distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the focal (or source detector distance) and the projection offset, from the U0, V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. 
>>> -Why reconstructions aren't working at all >>> >>> I enclosed a sample geometry file I generated that provides some acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with sart (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for your help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all, but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between Siddon and Joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modifying the code). We have done some similar work in Gate using RTK. This is not public yet, but the idea is to implement specific functors for Joseph. You should look at the code, and at the two TInterpolationWeightMultiplication and TProjectedValueAccumulation templates in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I check against my quick ray-tracer implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but can't > resolve the issue completely. > > I put an example below; for some reason I get signal at the outer > detectors where there should be none.
> > Also: Can I somehow keep track of the voxels traversed in your code > (for dosimetric and simulation applications)? > > > > > > Example: > > > double sid = 100, aid = 20; > int nproj = 1; > double first_angle = 0, angular_arc = 360; > > volume_spacing(1, 1, 1); > volume_center(0.0, 0.0, 0.0); > volume_size(3, 3, 3); > > projection_center(0.0, 0.0, 0.0); > projection_size(5, 5, nproj); > projection_spacing(1, 1, 1.0); > > > The projections are: > > (1) Joseph projector > > z: 0 > 0: 1: 2: 3: 4: > 0: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > 1: 1.000174 3.000208 3.000104 3.000208 1.000174 > 2: 1.000139 3.000104 3 3.000104 1.000139 > 3: 1.000174 3.000208 3.000104 3.000208 1.000174 > 4: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > > > (2) My Raytracer: > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > (3) RayBox Integration (from -1.5 to 1.5) > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > Values except at the boundary coincide; only at the detector boundary > is there signal that I don't understand. > > Rgds > Steffen > > > > 2014-12-05 9:46 GMT+01:00, Steffen Lukas : >> Hi Simon >> >> I check against my quick ray-tracer implementation in Siddon style. >> >> I tried the enlarged volume with 0-boundary already before, but can't >> resolve the issue completely. >> >> I put an example below; for some reason I get signal at the outer >> detectors where there should be none.
>> >> Arne >> >> >> >> Example: >> >> >> double sid = 100, aid = 20; >> int nproj = 1; >> double first_angle = 0, angular_arc = 360; >> >> volume_spacing(1, 1, 1); >> volume_center(0.0, 0.0, 0.0); >> volume_size(3, 3, 3); >> >> projection_center(0.0, 0.0, 0.0); >> int3 projection_size(5, 5, nproj); >> vect3 projection_spacing(1, 1, 1.0); >> matr3 projection_direction = matr3::Identity(); >> >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. >>> That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense, but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc.). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary, integration starts at mid-voxel, which makes it difficult >>>> for me to compare with our own implementation since information is >>>> partly lost. >>>> >>>> Can I somehow set up the projectors such that all (full) voxels are >>>> integrated? >>>> >>>> Thanks a lot >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags (i.e. I can't get any verbose output with '-v').
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call, has resolved the issue for me. I'm not sure, however, what the intended use of the second call was for (it occurs immediately after: args_params.check_required = 1; which I feel could just be moved above the first call, as it happens regardless, but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest github commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. 
> I'm not sure, however, what the intended use of the second call was > for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless), but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed, - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now, but let me know if you notice something wrong again.
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of the data in the code. Could you please confirm whether there is filtering in this code, and what type of filters there are (ramp, Hann, etc.)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not?

// Create reconstructed image
typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType;
ConstantImageSourceType::PointType origin;
ConstantImageSourceType::SpacingType spacing;
ConstantImageSourceType::SizeType sizeOutput;
ConstantImageSourceType::DirectionType direction;
direction.SetIdentity();

sizeOutput[0] = 1890; //1747; //1890; as found in dicom info
sizeOutput[1] = 2457; //as found in dicom info
sizeOutput[2] = 1;    //as found in dicom info

double offset(26.27); // Gap between detector and sample
origin[0] = 171.99;
origin[1] = -223./2;  // 223 is the height of the reconstructed volume
origin[2] = offset+0;

spacing[0] = 0.091;
spacing[1] = 0.091;
spacing[2] = 1;

direction[0][0] = -1; direction[0][1] = 0; direction[0][2] = 0;
direction[1][0] = 0;  direction[1][1] = 1; direction[1][2] = 0;
direction[2][0] = 0;  direction[2][1] = 0; direction[2][2] = 1;

ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New();

constantImageSource->SetOrigin( origin );
constantImageSource->SetSpacing( spacing );
constantImageSource->SetSize( sizeOutput );
constantImageSource->SetConstant( 0. );
constantImageSource->SetDirection( direction );

const ImageType::DirectionType& direct = constantImageSource->GetDirection();

std::cout << "Direction3DZeroMatrix= " << std::endl;
std::cout << direct << std::endl;

std::cout << "Performing reconstruction" << std::endl;

// BackProjection reconstruction (no filtering)
typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
typedef rtk::BackProjectionImageFilter< ImageType, ImageType > FDKCPUType;
FDKCPUType::Pointer feldkamp = FDKCPUType::New();
feldkamp->SetInput( 0, constantImageSource->GetOutput() );
feldkamp->SetInput( 1, imageStack );
feldkamp->SetGeometry( baseGeom );
feldkamp->Update();

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter performs filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, and redefines some methods (I think it performs a specific weighting of the projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not? 
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection recontruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue La?nnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and give then the same result). 
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory". > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0.
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection recontruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue La?nnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override issue makes sense, and I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset (to null, or default values, and thus thought 'override' meant something else!). Thanks again, glad the info was helpful. 
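For reference, the option-resolution ordering discussed in this thread (read the command line, then the config file, then the command line again) can be sketched in plain C++. This is a hypothetical illustration of the intended precedence only, not gengetopt's actual API; the function name and option names are mine:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Intended precedence: defaults < config file < command line.
// The first command-line pass would normally just locate --config;
// the second pass re-applies the command line so it wins over the config file.
// If that second pass also resets untouched options to their defaults
// ("override" enabled), the config-file values are silently lost --
// which is the kind of bug described above.
std::map<std::string, std::string> resolveOptions(
    const std::vector<std::pair<std::string, std::string>>& cmdline,
    const std::map<std::string, std::string>& configFile) {
  std::map<std::string, std::string> opts = {{"verbose", "0"}};  // defaults
  for (const auto& kv : configFile) opts[kv.first] = kv.second;  // config file
  for (const auto& kv : cmdline) opts[kv.first] = kv.second;     // CLI wins
  return opts;
}
```

With override disabled, an option given only in the config file survives the second command-line pass, while anything given on the command line still takes precedence.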
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occurred in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything works fine now but let me know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me.
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override issue makes sense, and I didn't have time to trace > through gengetopt. I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values, and thus > thought 'override' meant something else!). > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed.
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call, has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data. 
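A sparse subset like the one used here (72 of the 646 projections of a 360 degree scan) is typically built by striding evenly through the projection indices. A small sketch assuming even angular subsampling (the exact selection scheme used for this data isn't stated, and the function name is mine):

```cpp
#include <vector>

// Pick `keep` roughly evenly spaced projection indices out of `total`.
std::vector<int> subsampleProjections(int total, int keep) {
  std::vector<int> indices;
  for (int i = 0; i < keep; ++i)
    indices.push_back(i * total / keep);  // integer stride keeps angles ~uniform
  return indices;
}
```

Keeping 72 of 646 projections retains roughly one projection in nine, i.e. an angular step of about 5 degrees instead of about 0.56 degrees.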
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets. I hope this helps. Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations.
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became.
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma among 0.2, 0.5, 1.0, 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations.
Using > the CUDA forward and back projectors helped a lot maintain the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> > What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to too high a gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations.
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noiser compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma's didn't make a difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, tdmmwavelets seems perform better than tdmmtotalvariation > but neither gave satisfactory results. No sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. 
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again !). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot maintain > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization). 
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue La?nnec > 69373 Lyon cedex 08 FRANCE > > Mobile:+33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue La?nnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. Depending > on you data, gamma can have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noiser compared to that using 3 iterations. The contrast is getting worse > as well. 
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma's didn't make a > difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, tdmmwavelets seems perform better than tdmmtotalvariation but > neither gave satisfactory results. No sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !). 
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will have on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too few. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot to keep the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with a sparse data >> sample.
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> Which parameters can I adjust, and how (alpha, beta, or >> iterations?), to improve the ADMMTV reconstruction? There is not much >> description of this application on the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data.
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command : rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com : > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. > Depending on you data, gamma can have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry. 
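Cyril's rule of thumb in this thread (high gamma means high regularization, and too high a gamma blurs the image) can be illustrated with a toy example. The sketch below is not RTK's rtktotalvariationdenoising implementation; it is a plain gradient descent on a smoothed 1D total-variation objective, with made-up data, just to show how gamma trades data fidelity against smoothness:

```python
import random
from math import sqrt

def tv_denoise_1d(signal, gamma, n_iters=500, step=0.01, eps=1e-3):
    # Gradient descent on 0.5*||x - signal||^2 + gamma * sum sqrt(dx^2 + eps)
    # (a smoothed total-variation penalty; illustration only, not RTK's scheme).
    x = list(signal)
    for _ in range(n_iters):
        g = [x[i] - signal[i] for i in range(len(x))]  # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            t = gamma * d / sqrt(d * d + eps)  # smoothed TV gradient term
            g[i] -= t
            g[i + 1] += t
        x = [x[i] - step * g[i] for i in range(len(x))]
    return x

def total_variation(x):
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

random.seed(0)
# Noisy piecewise-constant signal: a 0 -> 1 step plus Gaussian noise
noisy = [(0.0 if i < 50 else 1.0) + random.gauss(0, 0.2) for i in range(100)]
results = {g: total_variation(tv_denoise_1d(noisy, g)) for g in (0.001, 1.0)}
print(results)  # higher gamma gives a smoother result (smaller total variation)
```

The orders-of-magnitude spread between the gammas tried in this thread (0.001 up to 10) is consistent with the parameter being strongly data-dependent.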
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noiser compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma's >> didn't make a difference. >> I hope I didn't misused the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, tdmmwavelets seems perform better than >> tdmmtotalvariation but neither gave satisfactory results. No sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run. 
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot maintain the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results. 
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page. 
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example. 1) The reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like: p attenuation 0 0, or log(257)-log(257) 1 log(257)-log(2) 2 log(257)-log(3) 3 log(257)-log(4) ... 254 log(257)-log(255) 255 log(257)-log(256) My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question.
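The table Chao writes out can be generated directly. A short sketch of the mapping as described above (not the actual rtkTiffLookupTableImageFilter code):

```python
from math import log

def tiff_attenuation_lut_8bit():
    """Lookup table described in the thread for 8-bit unsigned TIFF pixels:
    p = 0 maps to zero attenuation, and p >= 1 maps to log(257) - log(p + 1)."""
    reference = log(257)
    lut = [0.0]  # p = 0 is treated as air, i.e. no attenuation
    lut += [reference - log(p + 1) for p in range(1, 256)]
    return lut

lut = tiff_attenuation_lut_8bit()
# Attenuation decreases monotonically from p = 1 (darkest) to p = 255
print(lut[0], lut[1], lut[255])
```

The asymmetry is visible immediately: p = 0 jumps to zero attenuation, while p = 1..255 decreases smoothly from log(257) - log(2) and never reaches zero, which is exactly what Chao is asking about.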
I can't remember exactly but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border which is probably why I did this since border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged in the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK . If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please, don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any question and to incorporate suggested changes. Enjoy and thanks in advance for your help!
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input(0). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and RAM consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed be pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin of your projections' coordinate system at the center of the projections; have you set this origin accordingly in your projection images? Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, AddProjection of the geometry works in degrees, you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues with this task. > > The system projection matrix is transparently calibrated, and the > calibration process gives a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix.
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrices, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used can be found here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and matches the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arises when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrices, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > The extrinsic parameters matrix corresponds to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows the RTK > reference. And the translation in z should be replaced by zero, as it > corresponds to the source-isocenter distance, and is taken into account in the > magnification step. > So I think it is easy to find all the rotation angles, and the sid distance > as well > > The intrinsic parameters matrix can be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, subtracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier.
> -Why reconstructions aren't working at all > > I enclosed a sample of a geometry file I have generated that provides > some acceptable results when used for phantom projection, but provides a > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for your help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki, we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume, so the first input is the same as the output, the projections. I do not see how you could modify this; could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but as for playing with buffer pointers, I would try to avoid it if I were you, because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions.
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
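The in-place behaviour Simon describes in this thread can be mimicked with a small pure-Python mock. This is only a conceptual sketch of buffer aliasing; the class and its methods are invented for illustration and are not ITK's actual InPlaceImageFilter API:

```python
class InPlaceFilterMock:
    """Mimics itk::InPlaceImageFilter buffer reuse conceptually: when
    in_place is True, the output is the very same buffer as input #0;
    inputs #1..n are never overwritten."""

    def __init__(self, in_place=True):
        self.in_place = in_place
        self.inputs = []

    def set_input(self, index, buf):
        while len(self.inputs) <= index:
            self.inputs.append(None)
        self.inputs[index] = buf

    def update(self):
        # Reuse input #0 as output, or allocate a copy (analogous to InPlaceOff)
        out = self.inputs[0] if self.in_place else list(self.inputs[0])
        for i, v in enumerate(self.inputs[1]):
            out[i % len(out)] += v  # dummy "backprojection-like" accumulation
        return out

vol = [0.0] * 4     # plays the role of the volume (input #0 and output)
proj = [1.0] * 8    # plays the role of the projections (input #1)
f = InPlaceFilterMock(in_place=True)
f.set_input(0, vol); f.set_input(1, proj)
out = f.update()
print(out is vol)   # True: same buffer, like GetBufferPointer() returning equal addresses

f2 = InPlaceFilterMock(in_place=False)  # analogous to calling InPlaceOff()
f2.set_input(0, [0.0] * 4); f2.set_input(1, proj)
out2 = f2.update()
print(out2 is f2.inputs[0])  # False: output and input #0 are distinct buffers
```

This is why Louie's GetInput()/GetOutput() buffer pointers compare equal by default, and why only input #0 is ever overwritten.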
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hoi Thibault, The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description. Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues with this task. > > The system projection matrix is transparently calibrated, and the > calibration process gives a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrices, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used can be found here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and matches the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . 
> > My problem arises when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrices, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > Euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > The extrinsic parameters matrix corresponds to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows the RTK > reference. And the translation in z should be replaced by zero, as it > corresponds to the source-isocenter distance, and is taken into account in the > magnification step. > So I think it is easy to find all the rotation angles, and the sid distance > as well. > > The intrinsic parameters matrix can be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, subtracting the detector half size in each direction. > > What I do not understand is: > - In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > - Why reconstructions aren't working at all. > > I enclosed a sample of the geometry file I have generated that provides > some acceptable results when used for phantom projection, but provides a > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for your help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All, I have a question regarding the forward projectors. It seems that, at the boundary, integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot, Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations and o is the origin (n is the dimension of your space). 
I don't know if / where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct, but it's something you should know about. I'm a bit lost in your geometric descriptions but it should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allows changing from a world coordinate system, whose origin > corresponds to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the WCS to acquisition > plane transform, and this projection coordinate system is indeed centered in the > middle of the projection plane, which corresponds to the orthogonal > projection of the focal point. > > I am aware of that fact; this is why I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that corresponds to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction; is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinate system is used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projections coordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, AddProjection works in degrees; you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues with this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process gives a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrices, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used can be found here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and matches the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer. If I understand well this sentence: *"For many 'normal' 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like 'detector half size'."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. But... When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appears that projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzled me is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) regarding the world coordinate system. This information could help me to determine if my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I experienced the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd but why not ? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk. 
Thank you for your time and help. Simon: finding the position of the origin of the detector, and directions, etc... would require performing the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters, plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > The source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about the z axis, the > out-of-plane rotation defined in RTK is about the x axis, and there should be > another out-of-plane rotation about the y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives a common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can in fact be merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than in Fig 1 to > represent this detector change, as shown in Fig 2: an "ideal" source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the "ideal" source B. The perpendicular > ray AC from the real source A intersects the detector at C, differing from D > by CD or AB, the source offset, which is the reason you see the source > offset appear again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has an x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>> >> Thank you in advance for your help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and the offsets are 0. The Z coordinate of the origin of the projection stack is not used and is irrelevant. Your observation that it is odd is correct but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix, but it's up to you. For example, the direction vectors of the projection axes are the lines of your projection matrix if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, if you were able to provide a dataset, that would make it much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many ?normal? 
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
lastly, the size of projection >> offset is the distance between the origin of the projection image and the >> origin of the detector (point D). For many ?normal? 2D image format the >> origin of the image is just at the first pixel (one corner), so the size of >> the projection offset is just the distance from the corner to D and has >> nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK >> (causing shifts of source and detector origin, and changes of sid and sdd, >> etc. compared with the point of view of the Fig 1 style), although this >> angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?." > The projection offset correspond exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. > But... > When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appear that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both image origin and detector origin are at the center, so the projection offset is zero. But as I said, in many normal 2D image formats like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is then the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half the detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > Another aspect that puzzled me is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axes (u and v) are orientated with regard to the detector axes (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image formats, and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned. 
> > About the images geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I experienced the following behaviour: > > Origin point: > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) > the coordinate in Z is a bit odd but why not ? > Spacing > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, etc... would require performing the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : >> >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal"
source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". >> >> In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of the source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >>> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task. >>> >>> The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix.
>>> >>> Using the pinhole model, I tried to factorize this matrix as the product of various submatrix, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here at p18 of the pdf. >>> I think that the way I factorized the matrix is correct, and match the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . >>> >>> My problem arise when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrix, each modelling a system coordinate change, I have: >>> - a world coordinate system to source centered system matrix (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows RTK reference. And the translation in z should be replaced by zero, as it correspond to source-isocenter distance, and is taken into accounts in the magnification step. >>> So I think it is easy to find all the rotation angle, and the sid distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the focal (or source detector distance) and the projection offset, from the U0, V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. 
>>> -Why reconstructions aren't working at all >>> >>> I enclosed a sample of a geometry file I have generated that provides some acceptable results when used for phantom projection, but provides totally wrong reconstructions when reconstructing my image data with SART (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for your help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all, but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between Siddon and Joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modifying the code). We have done some similar work in Gate using RTK. This is not public yet but the idea is to implement specific functors for Joseph. You should look at the code and at the two TInterpolationWeightMultiplication and TProjectedValueAccumulation templates in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I checked against my quick ray-tracer implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but can't > resolve the issue completely. > > I put an example below, for some reason I get signal at the outer > detectors where there should be none.
>
> Also: Can I somehow keep track of the voxels traversed in your code
> (for dosimetric and simulation applications)?
>
> Example:
>
> double sid = 100, aid = 20;
> int nproj = 1;
> double first_angle = 0, angular_arc = 360;
>
> volume_spacing(1, 1, 1);
> volume_center(0.0, 0.0, 0.0);
> volume_size(3, 3, 3);
>
> projection_center(0.0, 0.0, 0.0);
> projection_size(5, 5, nproj);
> projection_spacing(1, 1, 1.0);
>
> The projections are:
>
> (1) Joseph projector
>
> z: 0
>     0:        1:        2:        3:        4:
> 0:  0.3339816 1.000174  1.000139  1.000174  0.3339816
> 1:  1.000174  3.000208  3.000104  3.000208  1.000174
> 2:  1.000139  3.000104  3         3.000104  1.000139
> 3:  1.000174  3.000208  3.000104  3.000208  1.000174
> 4:  0.3339816 1.000174  1.000139  1.000174  0.3339816
>
> (2) My Raytracer:
>
> z: 0
>     0:        1:        2:        3:        4:
> 0:  0         0         0         0         0
> 1:  0         3.000208  3.000104  3.000208  0
> 2:  0         3.000104  3         3.000104  0
> 3:  0         3.000208  3.000104  3.000208  0
> 4:  0         0         0         0         0
>
> (3) RayBox Integration (from -1.5 to 1.5)
>
> z: 0
>     0:        1:        2:        3:        4:
> 0:  0         0         0         0         0
> 1:  0         3.000208  3.000104  3.000208  0
> 2:  0         3.000104  3         3.000104  0
> 3:  0         3.000208  3.000104  3.000208  0
> 4:  0         0         0         0         0
>
> Values except at the boundary coincide; only at the detector boundary
> is there signal that I don't understand.
>
> Rgds
> Steffen
>
> 2014-12-05 9:46 GMT+01:00, Steffen Lukas :
>> Hi Simon
>>
>> I checked against my quick ray-tracer implementation in Siddon style.
>>
>> I tried the enlarged volume with 0-boundary already before, but can't
>> resolve the issue completely.
>>
>> I put an example below, for some reason I get signal at the outer
>> detectors where there should be none.
>> >> Arne >> >> >> >> Example: >> >> >> double sid = 100, aid = 20; >> int nproj = 1; >> double first_angle = 0, angular_arc = 360; >> >> volume_spacing(1, 1, 1); >> volume_center(0.0, 0.0, 0.0); >> volume_size(3, 3, 3); >> >> projection_center(0.0, 0.0, 0.0); >> int3 projection_size(5, 5, nproj); >> vect3 projection_spacing(1, 1, 1.0); >> matr3 projection_direction = matr3::Identity(); >> >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. >>> That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary integration starts at mid-voxel which makes it difficult >>>> for me to compare with our own implemention since information is >>>> partly lost. >>>> >>>> Can I somehow setup the projectors such that all (full) voxel are >>>> integrated? >>>> >>>> Thanks a lost >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags. (i.e. I can't get any verbose output with '-v'). 
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call has resolved the issue for me. I'm not sure, however, what the intended use of the second call was for (it occurs immediately after: args_params.check_required = 1; which I feel could just be moved above the first call, as it happens regardless), but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest github commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me.
> I'm not sure, however, what the intended use of the second call was > for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed. - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now but let me know if you notice something wrong again.
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom.

rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm if there is filtering in this code and what type of filters there are (ramp, Hann etc)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not?

// Create reconstructed image
typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType;
ConstantImageSourceType::PointType origin;
ConstantImageSourceType::SpacingType spacing;
ConstantImageSourceType::SizeType sizeOutput;
ConstantImageSourceType::DirectionType direction;
direction.SetIdentity();

sizeOutput[0] = 1890; //1747; //1890; as found in dicom info
sizeOutput[1] = 2457; //as found in dicom info
sizeOutput[2] = 1; //as found in dicom info

double offset(26.27); // Gap between detector and sample
origin[0] = 171.99;
origin[1] = -223/2; //223 is the height of the reconstructed volume
origin[2] = offset+0;

spacing[0] = 0.091;
spacing[1] = 0.091;
spacing[2] = 1;

direction [0][0] = -1;
direction [0][1] = 0;
direction [0][2] = 0;
direction [1][0] = 0;
direction [1][1] = 1;
direction [1][2] = 0;
direction [2][0] = 0;
direction [2][1] = 0;
direction [2][2] = 1;

ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New();

constantImageSource->SetOrigin( origin );
constantImageSource->SetSpacing( spacing );
constantImageSource->SetSize( sizeOutput );
constantImageSource->SetConstant( 0. );
constantImageSource->SetDirection(direction);

const ImageType::DirectionType& direct = constantImageSource->GetDirection();

std::cout <<"Direction3DZeroMatrix= " << std::endl;
std::cout << direct << std::endl;

std::cout << "Performing reconstruction" << std::endl;

//BackProjection reconstruction (no filtering)
typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> FDKCPUType;
FDKCPUType::Pointer feldkamp = FDKCPUType::New();
feldkamp->SetInput( 0, constantImageSource->GetOutput() );
feldkamp->SetInput( 1, imageStack);
feldkamp->SetGeometry( baseGeom );
feldkamp->Update();

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter performs filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, and redefines some methods (I think it performs a specific weighting of projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA or OpenCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not?
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result).
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as ?implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory?. > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection recontruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue La?nnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override issue makes sense, and I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset (to null, or default values, and thus thought 'override' meant something else!). Thanks again, glad the info was helpful. 
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occured in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. 
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override issue makes sense, and I didn't have time to trace > through gengetopt.) I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values), and thus > thought 'override' meant something else! > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed.
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything works fine now but let me know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data.
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets I hope this helps Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations.
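For reference, the distinct roles of alpha and beta that Cyril describes can be made explicit by writing out the cost function. This is a sketch of the standard ADMM TV formulation (notation assumed here; the exact formulation is in the thesis linked above):

```latex
% Reconstruction problem: R = forward projection, p = measured projections
\min_f \;\tfrac{1}{2}\,\lVert R f - p \rVert_2^2 \;+\; \alpha\,\mathrm{TV}(f)

% ADMM splits g = \nabla f and enforces the constraint with penalty beta:
\min_{f,g} \;\tfrac{1}{2}\,\lVert R f - p \rVert_2^2 \;+\; \alpha\,\lVert g \rVert_1
\quad \text{s.t. } g = \nabla f,
\quad \text{augmented term: } \tfrac{\beta}{2}\,\lVert \nabla f - g \rVert_2^2
```

Alpha weighs the TV term itself and therefore changes the solution; beta only weighs the split constraint, which is why in theory it affects convergence speed rather than the limit point.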
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became.
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma's didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations.
Using > the CUDA forward and back projectors helped a lot to keep the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations.
> > What parameters and how can I adjust (like alpha, beta, or iterations?) to > improve the ADMMTV reconstruction? There is not much description of this > application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to a too high gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations.
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noisier compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma's didn't make a difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, admmwavelets seems to perform better than admmtotalvariation > but neither gave satisfactory results. Not sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English.
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it > doesn't help, sorry again !). In > practice, beta is ubiquitous > and appears everywhere in the > calculations, therefore it is > hard to predict what effect an > increase/decrease of beta will > give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot to keep > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization).
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. Depending > on your data, gamma may have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noisier compared to that using 3 iterations. The contrast is getting worse > as well.
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma's didn't make a > difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, admmwavelets seems to perform better than admmtotalvariation but > neither gave satisfactory results. Not sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !).
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot to keep the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample.
>> I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data.
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command: rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com : > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. > Depending on your data, gamma may have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry.
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noisier compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma's >> didn't make a difference. >> I hope I didn't misuse the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, admmwavelets seems to perform better than >> admmtotalvariation but neither gave satisfactory results. Not sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run.
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot maintain the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results. 
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page. 
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example: 1) the reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:

p     attenuation
0     0, or log(257)-log(257)
1     log(257)-log(2)
2     log(257)-log(3)
3     log(257)-log(4)
...
254   log(257)-log(255)
255   log(257)-log(256)

My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question. 
I can't remember exactly but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border which is probably why I did this since border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From ghostcz at hotmail.com Tue Dec 2 16:21:47 2014 From: ghostcz at hotmail.com (louie L) Date: Tue, 2 Dec 2014 22:21:47 +0100 Subject: [Rtk-users] Input and output image buffer Message-ID: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. 
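The lookup table Chao describes can be written out explicitly (a sketch of the mapping as described in this thread, not the actual rtkTiffLookupTableImageFilter source):

```python
import math

def build_tiff_lut(bits=8):
    # Mapping described above for unsigned integer TIFF pixels:
    # reference = log(2**bits + 1); p = 0 maps to zero attenuation
    # (border/air pixels get a special treatment), and
    # 1 <= p <= 2**bits - 1 maps to reference - log(p + 1).
    reference = math.log(2 ** bits + 1)
    return [0.0] + [reference - math.log(p + 1) for p in range(1, 2 ** bits)]

lut = build_tiff_lut(8)   # lut[1] == log(257) - log(2), etc.
```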
However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie From simon.rit at creatis.insa-lyon.fr Wed Dec 3 03:31:28 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 09:31:28 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > Dear RTK users and developers, > > I am writing a backprojection filter whose superclass is > ImageToImageFilter. After allocating the output, I called > this->GetInput()->GetBufferPointer() and > this->GetOutput()->GetBufferPointer(). > to get the address of the images in memory. However the two functions > above return the same value. Why? If this is not the correct way to get the > address of the input image, how can I get that address? > Thank you. > > Best regards, > Louie > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues whith this task. 
The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of several submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here at p18 of the pdf. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlaneAngle/OutOfPlaneAngle model described here. My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: - a world coordinate system to source-centered system matrix (modeling the 3D Euler rotation and also the translation from isocenter to source) - a source-centered system to 2D buffer index matrix, modeling source-to-detector and pixel size scaling and then detector translation (U0,V0) As I understand it, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: the extrinsic parameters matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotations follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the sid distance as well. The intrinsic parameters matrix can be decomposed in order to find the focal length (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction. 
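The intrinsic-matrix decomposition sketched above can be checked numerically (all values hypothetical; the sign conventions for the offsets are exactly what the rest of this thread is about, so treat them as an assumption):

```python
import numpy as np

# Pinhole intrinsics K for a detector with pixel size (sx, sy) in mm,
# source-detector distance f (mm) and principal point (u0, v0) in pixels:
sx, sy = 0.4, 0.4
f = 1000.0
u0, v0 = 256.0, 200.0
K = np.array([[f / sx, 0.0,    u0],
              [0.0,    f / sy, v0],
              [0.0,    0.0,    1.0]])

# Recovering the physical parameters back from K:
sdd = K[0, 0] * sx            # focal length / source-detector distance (mm)
offset_x_mm = K[0, 2] * sx    # principal point in mm from the first pixel
offset_y_mm = K[1, 2] * sy
```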
What I do not understand is: - In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. - Why the reconstructions aren't working at all. I have enclosed a sample geometry file I generated that provides some acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with SART (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged in the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK . If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please, don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any questions and to incorporate suggested changes. Enjoy and thanks in advance for your help! 
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input( 0 ). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and ram consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. 
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices? RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig 1.1 shows an origin of your projections coordinate system at the center of the projections; have you defined this origin in your projection images? Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, AddProjection works in degrees, you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. 
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. 
> -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki, we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume so the first input is the same as the output, the projections. I do not see how you could modify this, could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what is the problem but if you could play with buffer pointers, I would try to avoid this if I were you because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions. 
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
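Simon's explanation of the in-place behavior can be mimicked with NumPy buffers (a conceptual sketch only; ITK's pipeline is C++, and `inplace_filter` here is a made-up stand-in for an InPlaceImageFilter):

```python
import numpy as np

def inplace_filter(image, in_place=True):
    # With in_place=True the output reuses the input buffer, like
    # input #0 of an ITK InPlaceImageFilter; with in_place=False
    # (the InPlaceOff case) the output gets its own allocation.
    out = image if in_place else image.copy()
    out += 1.0  # stand-in for the actual filtering
    return out

vol = np.zeros(4)
out = inplace_filter(vol, in_place=True)
shared = np.shares_memory(vol, out)       # True: the input was overwritten

vol2 = np.zeros(4)
out2 = inplace_filter(vol2, in_place=False)
shared2 = np.shares_memory(vol2, out2)    # False: the input is preserved
```

This is why GetInput()->GetBufferPointer() and GetOutput()->GetBufferPointer() coincide until InPlaceOff is called.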
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hoi Thibault, The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 
2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description. Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . 
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All I have a question regarding the forward projectors. It seems that at the boundary the integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations and o is the origin (n is the dimension of your space). 
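The mapping Simon gives can be checked numerically (a sketch; it mirrors what itk::Image::TransformIndexToPhysicalPoint does, with assumed spacing and origin values):

```python
import numpy as np

def index_to_physical(index, spacing, origin, direction=None):
    # x = D * S * v + o, with S = diag(spacing), D the direction matrix
    # (identity by default) and o the coordinate of the first pixel center.
    v = np.asarray(index, dtype=float)
    D = np.eye(v.size) if direction is None else np.asarray(direction)
    return D @ (np.asarray(spacing) * v) + np.asarray(origin)

# A detector image with 0.5 mm pixels whose first pixel center is at (-10, -10):
x = index_to_physical([2, 3], spacing=[0.5, 0.5], origin=[-10.0, -10.0])
# x == [-9.0, -8.5]
```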
I don't know if / where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct but it's something you should probably know about. I'm a bit lost in your geometric descriptions but that should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allow to change from a world coordinate system, whose origin > correspond to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, that correspond to the orthogonal > projection of the focal point. > > I am aware of that fact, this I why, I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that correspond to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector, is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> Thank you in advance for your help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense, but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc.). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel, which makes it difficult > for me to compare with our own implementation since information is > partly lost. > > Can I somehow set up the projectors such that all (full) voxels are > integrated? > > Thanks a lot > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed...
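For comparisons like Arne's, a reference line integral over full voxels can be computed with a standalone slab-method ray-box intersection. This is only a cross-checking sketch under simple assumptions (axis-aligned box, arbitrary ray), not RTK code:

```cpp
#include <algorithm>
#include <cmath>

// Length of the segment of a ray (origin o, unit direction d) inside an
// axis-aligned box [bmin, bmax], via the slab method. Returns 0 when the
// ray misses the box. Useful as a reference integral over *full* voxels.
double rayBoxLength(const double o[3], const double d[3],
                    const double bmin[3], const double bmax[3])
{
  double tmin = -1e300, tmax = 1e300;
  for (int i = 0; i < 3; ++i) {
    if (std::abs(d[i]) < 1e-12) {            // ray parallel to this slab pair
      if (o[i] < bmin[i] || o[i] > bmax[i]) return 0.0;
    } else {
      const double t1 = (bmin[i] - o[i]) / d[i];
      const double t2 = (bmax[i] - o[i]) / d[i];
      tmin = std::max(tmin, std::min(t1, t2));
      tmax = std::min(tmax, std::max(t1, t2));
    }
  }
  return (tmax > tmin) ? tmax - tmin : 0.0;
}
```

A central ray through a 3 mm cube yields 3.0, matching the "RayBox Integration" column discussed later in this thread, while an interpolating projector (Joseph) tapers off over the outer half-voxels.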
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer, If I understand well this sentence: *"For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size"."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding they should be close to half the detector size if all the out-of-plane rotations are negligible. But... When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with regard to the world coordinate system. This information could help me determine whether my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd, but why not? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk.
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require performing the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about the z axis, the > out-of-plane rotation defined in RTK is about the x axis, and there should be > another out-of-plane rotation about the y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives a common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can in fact be merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an "ideal" source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the "ideal" source B. The perpendicular > ray AC from the real source A intersects the detector at C, differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appear again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has an x element, > otherwise it contains both x and y elements.
Lastly, the size of the projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many "normal" 2D image formats the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like "detector half size". > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description. > > Regards, > Chao > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues with this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process gives a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrices, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used can be found here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and matches the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> .
>> >> Thank you in advance for your help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and the offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct, but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix, but it's up to you. For example, the direction vectors of the projection axes are the rows of your projection matrix if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, if you could provide a dataset, it would be much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many "normal"
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?." > The projection offset correspond exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. > But... > When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appear that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both the image origin and the detector origin are at the center, so the projection offset is zero. But as I said, in many normal 2D image formats such as .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half the detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate > system (assuming a 0 gantry angle) with regard to the world coordinate system. > This information could help me determine whether my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then there is another mapping, from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axes (u and v) are oriented with regard to the detector axes (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image formats, and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned.
>>> -Why reconstructions aren't working at all >>> >>> I enclose a sample geometry file I have generated that provides >>> some acceptable results when used for phantom projection, but provides >>> totally wrong reconstructions when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for your help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all, but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between Siddon and Joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modifying the code). We have done some similar work in Gate using RTK. This is not public yet, but the idea is to implement specific functors for Joseph. You should look at the code and at the two template parameters TInterpolationWeightMultiplication and TProjectedValueAccumulation in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I checked against my quick ray-tracer implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but can't > resolve the issue completely. > > I put an example below; for some reason I get signal at the outer > detectors where there should be none.
> > Also: Can I somehow keep track of the voxels traversed in your code > (for dosimetric and simulation applications). > > > > > > Example: > > > double sid = 100, aid = 20; > int nproj = 1; > double first_angle = 0, angular_arc = 360; > > volume_spacing(1, 1, 1); > volume_center(0.0, 0.0, 0.0); > volume_size(3, 3, 3); > > projection_center(0.0, 0.0, 0.0); > projection_size(5, 5, nproj); > projection_spacing(1, 1, 1.0); > > > The projections are: > > (1) Joseph projector > > z: 0 > 0: 1: 2: 3: 4: > 0: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > 1: 1.000174 3.000208 3.000104 3.000208 1.000174 > 2: 1.000139 3.000104 3 3.000104 1.000139 > 3: 1.000174 3.000208 3.000104 3.000208 1.000174 > 4: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > > > (2) My Raytracer: > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > (3) RayBox Integration (from -1.5 to 1.5) > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > Values coincide except at the boundary; only at the detector boundary > is there signal that I don't understand > > Rgds > Steffen > > > > 2014-12-05 9:46 GMT+01:00, Steffen Lukas : >> Hi Simon >> >> I checked against my quick ray-tracer implementation in Siddon style. >> >> I tried the enlarged volume with 0-boundary already before, but can't >> resolve the issue completely. >> >> I put an example below; for some reason I get signal at the outer >> detectors where there should be none.
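Tracking the traversed voxels, as Steffen asks, is essentially a voxel-walking loop. A minimal Amanatides-Woo style sketch (standalone C++ over a unit-spacing grid with its corner at the origin; illustrative only, not the RTK implementation, which hooks this via the Joseph functors Simon mentions):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// Record the (i,j,k) index of every voxel a ray crosses between the ray
// parameters tEnter and tExit (e.g. obtained from a ray-box intersection).
// Grid: n[0] x n[1] x n[2] voxels, unit spacing, corner at the origin.
std::vector<std::array<int,3>> traverse(const double o[3], const double d[3],
                                        const int n[3],
                                        double tEnter, double tExit)
{
  std::vector<std::array<int,3>> visited;
  double t = tEnter;
  int idx[3], step[3];
  double tMax[3], tDelta[3];
  for (int i = 0; i < 3; ++i) {
    const double p = o[i] + t * d[i];                       // entry point
    idx[i]  = std::min(n[i] - 1, std::max(0, (int)std::floor(p)));
    step[i] = (d[i] > 0) - (d[i] < 0);
    if (step[i] != 0) {
      const double next = idx[i] + (step[i] > 0 ? 1.0 : 0.0); // next plane
      tMax[i]   = (next - o[i]) / d[i];
      tDelta[i] = 1.0 / std::abs(d[i]);
    } else {
      tMax[i] = tDelta[i] = 1e300;                          // never crossed
    }
  }
  while (t < tExit) {
    visited.push_back({idx[0], idx[1], idx[2]});
    int m = 0;                                              // nearest plane
    if (tMax[1] < tMax[m]) m = 1;
    if (tMax[2] < tMax[m]) m = 2;
    t = tMax[m];
    idx[m] += step[m];
    if (idx[m] < 0 || idx[m] >= n[m]) break;                // left the grid
    tMax[m] += tDelta[m];
  }
  return visited;
}
```

For dosimetric bookkeeping, the per-voxel path length is simply the difference of consecutive crossing parameters t inside the loop.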
>> >> Arne >> >> >> >> Example: >> >> >> double sid = 100, aid = 20; >> int nproj = 1; >> double first_angle = 0, angular_arc = 360; >> >> volume_spacing(1, 1, 1); >> volume_center(0.0, 0.0, 0.0); >> volume_size(3, 3, 3); >> >> projection_center(0.0, 0.0, 0.0); >> int3 projection_size(5, 5, nproj); >> vect3 projection_spacing(1, 1, 1.0); >> matr3 projection_direction = matr3::Identity(); >> >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. >>> That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense, but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc.). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary, integration starts at mid-voxel, which makes it difficult >>>> for me to compare with our own implementation since information is >>>> partly lost. >>>> >>>> Can I somehow set up the projectors such that all (full) voxels are >>>> integrated? >>>> >>>> Thanks a lot >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags (i.e. I can't get any verbose output with '-v').
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call has resolved the issue for me. I'm not sure, however, what the intended use of the second call was (it occurs immediately after: args_params.check_required = 1; which I feel could just be moved above the first call, as it happens regardless), but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest GitHub commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call has resolved the issue for me.
> I'm not sure, however, what the intended use of the second call was > for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless), but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest GitHub commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed. - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now, but let me know if you notice something wrong again.
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm whether there is filtering in this code and what type of filters there are (ramp, Hann, etc.)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not?

// Create reconstructed image
typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType;
ConstantImageSourceType::PointType origin;
ConstantImageSourceType::SpacingType spacing;
ConstantImageSourceType::SizeType sizeOutput;
ConstantImageSourceType::DirectionType direction;
direction.SetIdentity();

sizeOutput[0] = 1890; //1747; //1890; as found in dicom info
sizeOutput[1] = 2457; //as found in dicom info
sizeOutput[2] = 1; //as found in dicom info

double offset(26.27); // Gap between detector and sample
origin[0] = 171.99;
origin[1] = -223/2; //223 is the height of the reconstructed volume
origin[2] = offset+0;

spacing[0] = 0.091;
spacing[1] = 0.091;
spacing[2] = 1;

direction [0][0] = -1;
direction [0][1] = 0;
direction [0][2] = 0;
direction [1][0] = 0;
direction [1][1] = 1;
direction [1][2] = 0;
direction [2][0] = 0;
direction [2][1] = 0;
direction [2][2] = 1;

ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New();

constantImageSource->SetOrigin( origin );
constantImageSource->SetSpacing( spacing );
constantImageSource->SetSize( sizeOutput );
constantImageSource->SetConstant( 0. );
constantImageSource->SetDirection(direction);

const ImageType::DirectionType& direct = constantImageSource->GetDirection();

std::cout << "Direction3DZeroMatrix= " << std::endl;
std::cout << direct << std::endl;

std::cout << "Performing reconstruction" << std::endl;

//BackProjection reconstruction (no filtering)
typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
typedef rtk::BackProjectionImageFilter< ImageType, ImageType > FDKCPUType;
FDKCPUType::Pointer feldkamp = FDKCPUType::New();
feldkamp->SetInput( 0, constantImageSource->GetOutput() );
feldkamp->SetInput( 1, imageStack );
feldkamp->SetGeometry( baseGeom );
feldkamp->Update();

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter performs filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter and redefines some methods (I think it performs a specific weighting of the projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA or OpenCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not?
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection recontruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result).
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory". > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0.
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection recontruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense; thanks for the quick usage explanation and fix. (Disabling the override option makes sense; I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset to null or default values, and I thus thought 'override' meant something else!) Thanks again, glad the info was helpful.
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occured in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. 
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work, but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override issue makes sense, and I didn't have time to trace > through gengetopt. I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values, and thus > thought 'override' meant something else!). > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed.
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything works fine now, but let me know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless), but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> GitHub commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with a sparse data sample. I could reconstruct, but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data.
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. Which parameters should I adjust, and how (alpha, beta, number of iterations?), to improve the ADMMTV reconstruction? There is not much description of this application on the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry!). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again!). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play with alpha - 3 iterations is way too little. I typically used 30 iterations.
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets. I hope this helps Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations.
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description of how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these findings will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled, but apparently more iterations gave worse results. 2. Similarly, I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test whether TV benefits us at all, using the tvdenoising application on the fdk-reconstructed image reconstructed from the full projection set. I found that the more iterations, the more blurry the image became.
For example, with 50 iterations the contrast on the denoised image is very low, so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, or 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. 
Using > the CUDA forward and back projectors helped a lot maintain the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> > What parameters and how can I adjust (like alpha, beta, or iterations?) to > improve the ADMMTV reconstruction? There is not much description of this > application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to too high a gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations. 
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noiser compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma's didn't make a difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, tdmmwavelets seems perform better than tdmmtotalvariation > but neither gave satisfactory results. No sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. 
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again !). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot maintain > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization). 
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile:+33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... 
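The effect of the gamma parameter discussed in this thread can be illustrated on a toy 1D signal. The sketch below is a smoothed gradient-descent toy version of total variation denoising, not RTK's actual rtktotalvariationdenoising implementation; the function names, the step size, and the smoothing constant eps are assumptions made for illustration only. Higher gamma weights the regularization more heavily and therefore produces a flatter result:

```python
import math

def tv_denoise_1d(signal, gamma, n_iter=1000, step=0.05, eps=1e-2):
    """Toy 1D TV denoising: minimize 0.5*||u - f||^2 + gamma * sum_i |u[i+1] - u[i]|
    by gradient descent on a smoothed absolute value sqrt(d^2 + eps)."""
    u = list(signal)
    n = len(u)
    for _ in range(n_iter):
        # gradient of the data-fidelity term 0.5*||u - f||^2
        grad = [u[i] - signal[i] for i in range(n)]
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)  # derivative of smoothed |d|
            grad[i] -= gamma * g
            grad[i + 1] += gamma * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

def total_variation(u):
    """Sum of absolute differences between neighboring samples."""
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

# A noisy step signal: TV denoising should flatten the noise but keep the edge.
noisy = [0.1, -0.05, 0.08, 0.0, 1.1, 0.95, 1.05, 1.0]
low = tv_denoise_1d(noisy, gamma=0.01)   # weak regularization
high = tv_denoise_1d(noisy, gamma=0.5)   # strong regularization
print(total_variation(noisy), total_variation(low), total_variation(high))
```

On such a piecewise-constant signal, a small gamma leaves the noise mostly intact while a large gamma flattens it; the edge survives much longer than the noise, which is the behavior that makes TV attractive for phantom-like images and problematic for fine anatomical detail.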
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. Depending > on you data, gamma can have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noiser compared to that using 3 iterations. The contrast is getting worse > as well. 
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma's didn't make a > difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, tdmmwavelets seems perform better than tdmmtotalvariation but > neither gave satisfactory results. No sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !). 
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot maintain the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample. 
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data. 
You can apply TV denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command: rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com : > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. > Depending on you data, gamma can have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry. 
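Cyril's point about non-zero attenuation in air comes from the intensity-to-attenuation (Beer-Lambert) conversion: projections are converted as mu = ln(I0/I), so a wrong I0 shifts every pixel, including the air region. A small self-contained Python sketch of that conversion (generic, with made-up detector readings, not RTK's code):

```python
import math

def intensity_to_attenuation(intensities, i0):
    """Beer-Lambert line integral: attenuation = ln(I0 / I).
    With a correct I0, pixels that see only air map to ~0 attenuation."""
    return [math.log(i0 / i) for i in intensities]

# Raw detector readings; 1000.0 plays the role of the unattenuated beam I0.
raw = [1000.0, 999.0, 500.0, 100.0]
good = intensity_to_attenuation(raw, i0=1000.0)
bad = intensity_to_attenuation(raw, i0=1200.0)  # overestimated I0
print(good[0], bad[0])  # air pixel: zero vs a spurious positive offset
```

With the overestimated I0, every line integral is biased by ln(1.2), which is the kind of systematic offset that degrades SART, Conjugate Gradient, and ADMM reconstructions while FDK, being linear, merely gains a bias.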
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noiser compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma's >> didn't make a difference. >> I hope I didn't misused the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, tdmmwavelets seems perform better than >> tdmmtotalvariation but neither gave satisfactory results. No sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run. 
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot maintain the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results. 
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page. 
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile:+33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile:+33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example. 1) The reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:
p     attenuation
0     0, or log(257)-log(257)
1     log(257)-log(2)
2     log(257)-log(3)
3     log(257)-log(4)
...
254   log(257)-log(255)
255   log(257)-log(256)
My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question. 
I can't remember exactly but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border which is probably why I did this since border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From ghostcz at hotmail.com Tue Dec 2 16:21:47 2014 From: ghostcz at hotmail.com (louie L) Date: Tue, 2 Dec 2014 22:21:47 +0100 Subject: [Rtk-users] Input and output image buffer Message-ID: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. 
However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie From simon.rit at creatis.insa-lyon.fr Wed Dec 3 03:31:28 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 09:31:28 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > Dear RTK users and developers, > > I am writing a backprojection filter whose superclass is > ImageToImageFilter. After allocating the output, I called > this->GetInput()->GetBufferPointer() and > this->GetOutput()->GetBufferPointer(). > to get the address of the images in memory. However the two functions > above return the same value. Why? If this is not the correct way to get the > address of the input image, how can I get that address? > Thank you. > > Best regards, > Louie > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues whith this task. 
The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here at p18 of the pdf. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: - a world coordinate system to source centered system matrix (modeling Euler 3D rotation and also translation from isocenter to source) - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) As I understand it, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: the extrinsic parameter matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotations follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the sid distance as well. The intrinsic parameter matrix can be decomposed in order to find the focal length (or source-detector distance) and the projection offsets, from the U0, V0 parameters, subtracting the detector half size in each direction.
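The intrinsic-parameter extraction described above can be sketched in a few lines. This is a minimal, self-contained illustration under the stated pinhole convention, not RTK code; the type and function names are invented for the example, and whether the detector half size must then be subtracted depends on where the image origin is placed, which is exactly what the rest of this thread debates.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Pinhole intrinsic matrix of the model discussed above (hypothetical layout):
// K = [ f/px   0    u0 ]
//     [  0    f/py  v0 ]
//     [  0     0     1 ]
// f: source-detector distance in mm, px/py: pixel pitch in mm,
// (u0, v0): principal point in pixel units.
struct Intrinsics {
  double sdd;           // focal length, i.e. source-detector distance, in mm
  double u0_mm, v0_mm;  // principal point in mm
};

// Read sdd and the principal point (in mm) off K, given the pixel pitch.
Intrinsics decomposeK(const std::array<std::array<double, 3>, 3>& K,
                      double px, double py) {
  Intrinsics p;
  p.sdd = K[0][0] * px;    // f = (f/px) * px
  p.u0_mm = K[0][2] * px;  // scale the principal point from pixels to mm
  p.v0_mm = K[1][2] * py;
  return p;
}
```

For example, K[0][0] = 1000 with a 0.2 mm pixel pitch corresponds to an sdd of 200 mm.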
What I do not understand is: -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. -Why reconstructions aren't working at all I enclose a sample geometry file I have generated that provides some acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with SART (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged into the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK . If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any questions and to incorporate suggested changes. Enjoy and thanks in advance for your help!
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input( 0 ). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and ram consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. 
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices? RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin of your projections coordinate system at the center of the projections, have you Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, the AddProjection of the image works in degrees, you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix.
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. 
> -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki; we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume, so the first input is the same as the output, the projections. I do not see how you could modify this; could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but even if you can play with buffer pointers, I would try to avoid it if I were you, because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions.
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hoi Thibault, Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal"
2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of the source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description. Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the > pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > .
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All I have a question regarding the forward projectors. It seems that at the boundary integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations, and o is the origin (n is the dimension of your space).
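Written out, this index-to-physical mapping is just a matrix-vector product plus the origin. The sketch below is a self-contained illustration of the formula for the 2D case, not actual ITK code (ITK's TransformIndexToPhysicalPoint computes the same thing):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Physical point x = D * S * v + o:
// D is the direction matrix, S = diag(spacing), v the pixel index,
// o the origin (physical coordinate of the center of the first pixel).
std::array<double, 2> indexToPhysical(
    const std::array<std::array<double, 2>, 2>& direction,
    const std::array<double, 2>& spacing,
    const std::array<double, 2>& index,
    const std::array<double, 2>& origin) {
  std::array<double, 2> x{};
  for (int i = 0; i < 2; ++i) {
    for (int j = 0; j < 2; ++j)
      x[i] += direction[i][j] * spacing[j] * index[j];  // D * S * v
    x[i] += origin[i];                                  // + o
  }
  return x;
}
```

With an identity direction, 0.2 mm spacing and origin (-51.1, -51.1), index (0, 0) maps to the origin and index (511, 511) maps to (51.1, 51.1), i.e. a 512-pixel detector centered on the axis.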
I don't know if / where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct but it's something you should probably know about. I'm a bit lost in your geometric descriptions but that should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allow to change from a world coordinate system, whose origin > correspond to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, that correspond to the orthogonal > projection of the focal point. > > I am aware of that fact, this I why, I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that correspond to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector, is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer. If I understand this sentence correctly: *"For many 'normal' 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like 'detector half size'."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to the half detector size if all the out-of-plane rotations are negligible. But... When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that the projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzles me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with respect to the world coordinate system. This information could help me determine whether my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the Z coordinate is a bit odd, but why not? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk.
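On the origin values discussed in this thread: with the ITK convention that the origin is the coordinate of the center of the first pixel, an n-pixel detector row of pitch s is centered on the axis when its origin component is -(n-1)*s/2 (note the n-1: the origin refers to the first pixel center, not the detector edge). A small helper sketch, not RTK code:

```cpp
#include <cassert>
#include <cmath>

// Origin component (in mm) that centers an n-pixel detector row of pixel
// pitch `spacing` (mm) on the axis, under the ITK convention that the
// origin is the physical coordinate of the *center* of the first pixel.
double centeredOrigin(int n, double spacing) {
  return -0.5 * (n - 1) * spacing;
}
```

For a 512-pixel row at 0.2 mm pitch this gives -51.1 mm, slightly different from -detector_size/2 = -51.2 mm.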
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives an common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an ?ideal? source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the ?ideal? source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arises when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrices, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modelling a 3D Euler rotation and also the translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modelling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand it, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> The extrinsic parameter matrix corresponds to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotations follows the RTK >> reference. And the translation in z should be replaced by zero, as it >> corresponds to the source-isocenter distance and is taken into account in the >> magnification step. >> So I think it is easy to find all the rotation angles, and the sid >> distance as well >> >> The intrinsic parameter matrix can be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, subtracting the detector half size in each direction. >> >> What I do not understand is: >> - In the RTK documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in RTK >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> - Why reconstructions aren't working at all >> >> I have enclosed a sample of the geometry file I generated; it provides >> some acceptable results when used for phantom projection, but provides >> totally wrong reconstructions when reconstructing my image data with SART >> (sample image taken from a reconstructed volume). 
>> >> Thank you in advance for your help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and the offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix but it's up to you. For example, the directions of the vectors of the projection axes are the lines of your projection matrix if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, if you were able to provide a dataset, that would make it much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many "normal" 
2D image formats the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > "detector half size"."* > The projection offset corresponds exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half the detector size if all the out-of-plane rotations are > negligible. > But... > When I generate a perfect geometry, without out-of-plane angles, > with rtksimulatedgeometry, it appears that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > Another aspect that puzzled me is that I can't find documentation about > the orientation of the u axis and v axis of the detector coordinate > system (assuming a 0 gantry angle) with regard to the world coordinate system. > This information could help me determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and without surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinate in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of values I use when importing my images in RTK. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> [snip] >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size"." > The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half the detector size if all the out-of-plane rotations are negligible. > But... > When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both the image origin and the detector origin are at the center, so the projection offset is zero. But as I said, in many normal 2D image formats like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half the detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with regard to the world coordinate system. > This information could help me determine if my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axes (u and v) are orientated with regard to the detector axes (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image formats, and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned. 
> [snip] From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between Siddon and Joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modifying the code). We have done some similar work in Gate using RTK. This is not public yet but the idea is to implement a specific functor for Joseph. You should look at the code and at the two TInterpolationWeightMultiplication and TProjectedValueAccumulation templates in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > Hi Simon > > I check against my quick ray-tracer implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but can't > resolve the issue completely. > > I put an example below; for some reason I get signal at the outer > detectors where there should be none. 
> > Also: Can I somehow keep track of the voxels traversed in your code > (for dosimetric and simulation applications)? > > Example: >
> double sid = 100, aid = 20;
> int nproj = 1;
> double first_angle = 0, angular_arc = 360;
>
> volume_spacing(1, 1, 1);
> volume_center(0.0, 0.0, 0.0);
> volume_size(3, 3, 3);
>
> projection_center(0.0, 0.0, 0.0);
> projection_size(5, 5, nproj);
> projection_spacing(1, 1, 1.0);
>
> The projections are:
>
> (1) Joseph projector
>
> z: 0
>         0:         1:         2:         3:         4:
> 0: 0.3339816  1.000174   1.000139   1.000174   0.3339816
> 1: 1.000174   3.000208   3.000104   3.000208   1.000174
> 2: 1.000139   3.000104   3          3.000104   1.000139
> 3: 1.000174   3.000208   3.000104   3.000208   1.000174
> 4: 0.3339816  1.000174   1.000139   1.000174   0.3339816
>
> (2) My Raytracer:
>
> z: 0
>         0:         1:         2:         3:         4:
> 0: 0          0          0          0          0
> 1: 0          3.000208   3.000104   3.000208   0
> 2: 0          3.000104   3          3.000104   0
> 3: 0          3.000208   3.000104   3.000208   0
> 4: 0          0          0          0          0
>
> (3) RayBox Integration (from -1.5 to 1.5)
>
> z: 0
>         0:         1:         2:         3:         4:
> 0: 0          0          0          0          0
> 1: 0          3.000208   3.000104   3.000208   0
> 2: 0          3.000104   3          3.000104   0
> 3: 0          3.000208   3.000104   3.000208   0
> 4: 0          0          0          0          0
>
> Values coincide except at the boundary; only at the detector boundary > is there signal that I don't understand > > Rgds > Steffen > > 2014-12-05 9:46 GMT+01:00, Steffen Lukas : >> [snip] >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary, integration starts at mid-voxel, which makes it difficult >>>> for me to compare with our own implementation since information is >>>> partly lost. >>>> >>>> Can I somehow set up the projectors such that all (full) voxels are >>>> integrated? >>>> >>>> Thanks a lot >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags (i.e. I can't get any verbose output with '-v'). 
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call has resolved the issue for me. I'm not sure, however, what the intended use of the second call was (it occurs immediately after args_params.check_required = 1;, which I feel could just be moved above the first call, as it happens regardless), but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest github commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked down the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call was added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call has resolved the issue for me. 
> I'm not sure, however, what the intended use of the second call was > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless), but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed, - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now but let me know if you notice something wrong again. 
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > [snip] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm whether there is filtering in this code and what type of filters there are (ramp, Hann etc)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not?

// Create reconstructed image
typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType;
ConstantImageSourceType::PointType origin;
ConstantImageSourceType::SpacingType spacing;
ConstantImageSourceType::SizeType sizeOutput;
ConstantImageSourceType::DirectionType direction;
direction.SetIdentity();
sizeOutput[0] = 1890; //1747; //1890; as found in dicom info
sizeOutput[1] = 2457; //as found in dicom info
sizeOutput[2] = 1;    //as found in dicom info
double offset(26.27); // Gap between detector and sample
origin[0] = 171.99;
origin[1] = -223/2;   //223 is the height of the reconstructed volume
origin[2] = offset+0;
spacing[0] = 0.091;
spacing[1] = 0.091;
spacing[2] = 1;
direction[0][0] = -1; direction[0][1] = 0; direction[0][2] = 0;
direction[1][0] = 0;  direction[1][1] = 1; direction[1][2] = 0;
direction[2][0] = 0;  direction[2][1] = 0; direction[2][2] = 1;
ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New();
constantImageSource->SetOrigin( origin );
constantImageSource->SetSpacing( spacing );
constantImageSource->SetSize( sizeOutput );
constantImageSource->SetConstant( 0. );
constantImageSource->SetDirection( direction );
const ImageType::DirectionType& direct = constantImageSource->GetDirection();
std::cout << "Direction3DZeroMatrix= " << std::endl;
std::cout << direct << std::endl;
std::cout << "Performing reconstruction" << std::endl;
// BackProjection reconstruction (no filtering)
typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
typedef rtk::BackProjectionImageFilter< ImageType, ImageType > FDKCPUType;
FDKCPUType::Pointer feldkamp = FDKCPUType::New();
feldkamp->SetInput( 0, constantImageSource->GetOutput() );
feldkamp->SetInput( 1, imageStack );
feldkamp->SetGeometry( baseGeom );
feldkamp->Update();

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter performs filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter and redefines some methods (I think it performs a specific weighting of the projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK. 
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA or OpenCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters.

Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters.

Regards, Cyril

On 12/10/2014 12:59 PM, Padraig Looney wrote:
> Dear list,
>
> We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). The X position looks correct for this phantom.
>
> rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *_filtered_* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm if there is filtering in this code and what type of filters there are (ramp, Hann etc)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not?
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
> );
> constantImageSource->SetDirection(direction);
>
> const ImageType::DirectionType& direct = constantImageSource->GetDirection();
>
> std::cout << "Direction3DZeroMatrix= " << std::endl;
> std::cout << direct << std::endl;
>
> std::cout << "Performing reconstruction" << std::endl;
>
> // BackProjection reconstruction (no filtering)
> typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
> ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
> typedef rtk::BackProjectionImageFilter< ImageType, ImageType > FDKCPUType;
> FDKCPUType::Pointer feldkamp = FDKCPUType::New();
> feldkamp->SetInput( 0, constantImageSource->GetOutput() );
> feldkamp->SetInput( 1, imageStack );
> feldkamp->SetGeometry( baseGeom );
> feldkamp->Update();
>
> _______________________________________________
> Rtk-users mailing list
> Rtk-users at public.kitware.com
> http://public.kitware.com/mailman/listinfo/rtk-users

-- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result).
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as ?implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory?. > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
> );
> constantImageSource->SetDirection(direction);
>
> const ImageType::DirectionType& direct = constantImageSource->GetDirection();
>
> std::cout << "Direction3DZeroMatrix= " << std::endl;
> std::cout << direct << std::endl;
>
> std::cout << "Performing reconstruction" << std::endl;
>
> // BackProjection reconstruction (no filtering)
> typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
> ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
> typedef rtk::BackProjectionImageFilter< ImageType, ImageType > FDKCPUType;
> FDKCPUType::Pointer feldkamp = FDKCPUType::New();
> feldkamp->SetInput( 0, constantImageSource->GetOutput() );
> feldkamp->SetInput( 1, imageStack );
> feldkamp->SetGeometry( baseGeom );
> feldkamp->Update();
>
> _______________________________________________
> Rtk-users mailing list
> Rtk-users at public.kitware.com
> http://public.kitware.com/mailman/listinfo/rtk-users
>
> -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE
>
> Mobile: +33 6 69 46 73 79
>
> _______________________________________________
> Rtk-users mailing list
> Rtk-users at public.kitware.com
> http://public.kitware.com/mailman/listinfo/rtk-users

-------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override option makes sense; I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset to null or default values, and thus I thought 'override' meant something else!) Thanks again, glad the info was helpful.
Steve

On 14-12-10 4:01 AM, Simon Rit wrote:
> Hi,
> Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly:
> - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore,
> - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed.
> - your fix was not perfect because we would not have checked that the required options were set,
> - it turns out that disabling the override option did the job.
> Everything works fine now but let me know if you notice something wrong again. Thanks again,
> Simon
>
> On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote:
> > A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags (i.e. I can't get any verbose output with '-v').
> > It seems to happen after making a second call to:
> >
> > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params)
> >
> > Removing this second call has resolved the issue for me.
> > I'm not sure, however, what the intended use of the second call was for (it occurs immediately after:
> >
> > args_params.check_required = 1;
> >
> > which I feel could just be moved above the first call, as it happens regardless), but I may be missing something.
> >
> > I've attached my quickly modified rtkMacro.h for comparison to the latest github commit.
> >
> > Anyhow, hopefully this info is useful, and doesn't only affect me.
> > Steve
> >
> > Our system setup:
> > -Ubuntu 14.04 x64
> > -gcc 4.8.2
> > -cuda 6.5
> >
> > _______________________________________________
> > Rtk-users mailing list
> > Rtk-users at public.kitware.com
> > http://public.kitware.com/mailman/listinfo/rtk-users

-------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon

On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote:
> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override option makes sense; I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset to null or default values, and thus I thought 'override' meant something else!)
>
> Thanks again, glad the info was helpful.
>
> Steve
>
> On 14-12-10 4:01 AM, Simon Rit wrote:
> > Hi,
> > Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly:
> > - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore,
> > - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed.
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call, has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data. 
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable
- The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets)
- You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets

I hope this helps

Looking forward to reading you again, Cyril

On 12/12/2014 06:42 PM, Howard wrote:
> I am testing the ADMM total variation reconstruction with a sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data.
> The parameters I used in ADMMTV were the following:
> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3
> while the fdk reconstruction parameters are:
> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5
> The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations.
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue La?nnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noiser compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became. 
For example, with 50 iterations the contrast on the denoised image is very low, so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, or 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50

In summary, admmwavelets seems to perform better than admmtotalvariation, but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated.

-Howard

On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote:
> Hello Howard,
>
> Good to hear that you're using RTK :)
> I'll try to answer all your questions, and give you some advice:
> - In general, you can expect some improvement over rtkfdk, but not a huge one
> - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French)
> - Adjusting the parameters is, in itself, a research topic (sorry!). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again!). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha
> - 3 iterations is way too little. I typically used 30 iterations.
Using > the CUDA forward and back projectors helped a lot maintain the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page.
>
> Thanks,
>
> -howard
>
> _______________________________________________
> Rtk-users mailing list
> Rtk-users at public.kitware.com
> http://public.kitware.com/mailman/listinfo/rtk-users
>
> -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE
>
> Mobile: +33 6 69 46 73 79

-------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to too high a gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril

On 12/17/2014 03:49 PM, Howard wrote:
> Hi Cyril,
> Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient.
> I am reporting what I found and hope these will give you information for further improvement.
> 1. I repeated admmtotalvariation with 30 iterations.
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noiser compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma's didn't make a difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, tdmmwavelets seems perform better than tdmmtotalvariation > but neither gave satisfactory results. No sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. 
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again !). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot maintain > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization). 
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with a sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> Which parameters (like alpha, beta, or >> iterations) can I adjust, and how, to improve the ADMMTV reconstruction? There is not >> much description of this application on the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with the geometry file and the other is the fdk reconstructed image based on the full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to too high a gamma. Depending > on your data, gamma may have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noisier compared to that using 3 iterations. The contrast is getting worse > as well.
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from the full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma didn't make a > difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, admmwavelets seems to perform better than admmtotalvariation but > neither gave satisfactory results. Not sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !).
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play with alpha >> - 3 iterations is way too few. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot to keep the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with a sparse data >> sample.
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> Which parameters (like alpha, beta, or >> iterations) can I adjust, and how, to improve the ADMMTV reconstruction? There is not much >> description of this application on the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data.
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command: rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com: > one is the sparse projection set with the geometry file and the other is > the fdk reconstructed image based on the full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to too high a gamma. > Depending on your data, gamma may have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry.
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noisier compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from the full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma to 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma >> didn't make a difference. >> I hope I didn't misuse the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, admmwavelets seems to perform better than >> admmtotalvariation but neither gave satisfactory results. Not sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run.
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play with alpha >> - 3 iterations is way too few. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot to keep the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results.
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> a sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> Which parameters (like alpha, beta, or >>> iterations) can I adjust, and how, to improve the ADMMTV reconstruction? There is >>> not much description of this application on the wiki page.
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8-bit unsigned integer tiff pixels as an example: 1) the reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:

p     attenuation
0     0, or log(257)-log(257)
1     log(257)-log(2)
2     log(257)-log(3)
3     log(257)-log(4)
...
254   log(257)-log(255)
255   log(257)-log(256)

My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question.
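As a quick check, the 8-bit table above can be reproduced with a short sketch (plain Python; natural logarithms are assumed, matching the log() in Chao's expressions):

```python
import math

# Sketch of the 8-bit lookup table described above:
# p = 0 maps to zero attenuation, and for 1 <= p <= 255 the
# attenuation is log(257) - log(p + 1).
reference = math.log(257)
lut = [0.0] + [reference - math.log(p + 1) for p in range(1, 256)]

print(lut[1])    # log(257) - log(2)
print(lut[255])  # log(257) - log(256)
```

The sketch only restates the table; it is not the actual implementation in rtkTiffLookupTableImageFilter.h.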
I can't remember exactly, but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border, which is probably why I did this, since the border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8-bit unsigned integer tiff > pixels as an example: > 1) the reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From ghostcz at hotmail.com Tue Dec 2 16:21:47 2014 From: ghostcz at hotmail.com (louie L) Date: Tue, 2 Dec 2014 22:21:47 +0100 Subject: [Rtk-users] Input and output image buffer Message-ID: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory.
However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie From simon.rit at creatis.insa-lyon.fr Wed Dec 3 03:31:28 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 09:31:28 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed point to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > Dear RTK users and developers, > > I am writing a backprojection filter whose superclass is > ImageToImageFilter. After allocating the output, I called > this->GetInput()->GetBufferPointer() and > this->GetOutput()->GetBufferPointer() > to get the address of the images in memory. However the two functions > above return the same value. Why? If this is not the correct way to get the > address of the input image, how can I get that address? > Thank you. > > Best regards, > Louie > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task.
The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here, at p. 18 of the pdf. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here. My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: - a world coordinate system to source centered system matrix (modeling the Euler 3D rotation and also the translation from isocenter to source) - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: the extrinsic parameters matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance, and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the SID distance as well. The intrinsic parameters matrix can be decomposed in order to find the focal length (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction.
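As a concrete illustration of this factorization, here is a minimal sketch (plain Python; the focal length, offsets and source-isocenter distance are hypothetical values, not taken from the calibration, and the rotation is the identity):

```python
# Sketch: compose a 3x4 pinhole matrix P = K [R | t] for a trivial
# geometry (identity rotation) and check that a point on the rotation
# axis projects to the principal point (u0, v0). All values are
# illustrative, not taken from the calibration discussed above.
f, u0, v0 = 1000.0, 256.0, 256.0          # focal and detector offset, in pixels
K = [[f, 0.0, u0],
     [0.0, f, v0],
     [0.0, 0.0, 1.0]]                     # intrinsic parameters
Rt = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 1500.0]]            # extrinsic: R = I, t = (0, 0, SID)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = matmul(K, Rt)

# Homogeneous projection of the isocenter (0, 0, 0): with x = y = z = 0,
# only the translation column of P contributes.
u, v, w = (row[3] for row in P)
print(u / w, v / w)   # the isocenter lands on the principal point (u0, v0)
```

With a real calibration matrix, the same composition would be run in reverse: recover K and [R | t] from P, then map them onto the RTK GantryAngle/InPlaneAngle/OutOfPlaneAngle and offset parameters.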
What I do not understand is: - In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. - Why reconstructions aren't working at all. I enclosed a sample geometry file I have generated that provides some acceptable results when used for phantom projection, but yields a totally wrong reconstruction when reconstructing my image data with sart (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged into the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK. If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any questions and to incorporate suggested changes. Enjoy and thanks in advance for your help!
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input(0). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and RAM consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed point to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you.
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices? RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, and the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig 1.1 shows an origin of your projections' coordinate system at the center of the projections; have you checked this? Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, the AddProjection method works in degrees; you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues with this task. > > The system projection matrix is transparently calibrated, and the > calibration process gives a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix.
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrices, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used can be found here > at p. 18 of the pdf. > I think that the way I factorized the matrix is correct, and matches the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arises when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrices, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > the Euler 3D rotation and also the translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > the extrinsic parameters matrix corresponds to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows the RTK > reference. And the translation in z should be replaced by zero, as it > corresponds to the source-isocenter distance, and is taken into account in the > magnification step. > So I think it is easy to find all the rotation angles, and the SID distance > as well. > > The intrinsic parameters matrix can be decomposed in order to find the > focal length (or source-detector distance) and the projection offset, from the U0, > V0 parameters, subtracting the detector half size in each direction. > > What I do not understand is: > - In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier.
> -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki, we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume so the first input is the same as the output, the projections. I do not see how you could modify this, could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but I would avoid playing with buffer pointers if I were you, because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions. 
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
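The buffer aliasing discussed in this thread can be illustrated with a toy filter in plain C++. This is a sketch of the *behaviour* of ITK's InPlaceImageFilter, not its actual implementation; the class and method names below are invented for illustration. When in-place execution is on, the filter reuses input #0's buffer as its output, which is why GetBufferPointer() returns the same address for both; turning it off (InPlaceOff) makes the filter allocate a separate output buffer.

```cpp
#include <vector>

// Toy model of in-place filtering (hypothetical names, not ITK code).
class ToyInPlaceFilter
{
public:
  explicit ToyInPlaceFilter(bool inPlace) : m_InPlace(inPlace) {}

  // Doubles every value of `input`; mimics a filter's GenerateData().
  // Returns a pointer to the buffer holding the result.
  std::vector<float> * Run(std::vector<float> & input)
  {
    if (m_InPlace)
    {
      m_Output = &input;  // reuse input #0's buffer (like InPlaceOn)
    }
    else
    {
      m_Copy = input;     // allocate a separate output (like InPlaceOff)
      m_Output = &m_Copy;
    }
    for (float & v : *m_Output)
      v *= 2.f;
    return m_Output;
  }

private:
  bool m_InPlace;
  std::vector<float>   m_Copy;
  std::vector<float> * m_Output = nullptr;
};
```

With `inPlace == true` the returned pointer equals the address of the input (and the input is overwritten); with `inPlace == false` the two buffers are distinct and the input is left untouched, mirroring what Louie observed before and after calling InPlaceOff.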
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hoi Thibault, Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about z axis, the out-of-plane rotation defined in RTK is about x axis, and there should be another out-of-plane rotation about y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than that in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C differing from D by CD or AB, the source offset, which is the reason that you see the source offset appears again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has x element, otherwise it contains both x and y elements. Lastly, the size of projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 
2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description? Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . 
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All I have a question regarding the forward projectors. It seems that at the boundary integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations and o is the origin (n is the dimension of your space). 
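The mapping x = d*s*v + o can be written out in a few lines. The following standalone sketch (plain C++, no ITK dependency; the type and function names are mine) applies the same formula that itk::Image uses in TransformIndexToPhysicalPoint, which can be handy for checking a geometry by hand:

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// x = d * s * v + o: direction matrix d, diagonal spacing s,
// (continuous) index v, origin o. Hypothetical helper mirroring
// itk::Image::TransformIndexToPhysicalPoint.
Vec3 IndexToPhysical(const Mat3 & direction, const Vec3 & spacing,
                     const Vec3 & origin, const Vec3 & index)
{
  Vec3 x{};
  for (int i = 0; i < 3; ++i)
  {
    x[i] = origin[i];
    for (int j = 0; j < 3; ++j)
      x[i] += direction[i][j] * spacing[j] * index[j];  // d * s * v
  }
  return x;
}
```

For example, with an identity direction, 2 mm in-plane spacing and origin (-10, -10, 0), index (5, 5, 0) maps to the physical point (0, 0, 0), i.e. the center of an 11-pixel row.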
I don't know if / where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct but it's something you should probably know about. I'm a bit lost in your geometric descriptions but that should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allow to change from a world coordinate system, whose origin > correspond to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, that correspond to the orthogonal > projection of the focal point. > > I am aware of that fact, this I why, I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that correspond to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector, is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer, If I understand well this sentence: *"For many "normal" 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size"."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. But... When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appears that projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzled me is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) regarding the world coordinate system. This information could help me to determine if my projectionOffset should be negative or positive. About the images geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I experienced the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd but why not ? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk. 
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives an common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an ?ideal? source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the ?ideal? source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>> >> Thank you in advance for you help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix but it's up to you. For example, the direction vectors of the projection axes are the lines of your projection matrix if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, if you were able to provide a dataset, it would be much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many ?normal? 
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
lastly, the size of projection >> offset is the distance between the origin of the projection image and the >> origin of the detector (point D). For many ?normal? 2D image format the >> origin of the image is just at the first pixel (one corner), so the size of >> the projection offset is just the distance from the corner to D and has >> nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK >> (causing shifts of source and detector origin, and changes of sid and sdd, >> etc. compared with the point of view of the Fig 1 style), although this >> angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arises when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrices, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling the Euler 3D rotation and also the translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand it, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> The extrinsic parameters matrix corresponds to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotations follows the RTK >>> reference. And the translation in z should be replaced by zero, as it >>> corresponds to the source-isocenter distance and is taken into account in the >>> magnification step. >>> So I think it is easy to find all the rotation angles, and the sid >>> distance as well. >>> >>> The intrinsic parameters matrix can be decomposed in order to find the >>> focal length (or source-detector distance) and the projection offset, from the U0, >>> V0 parameters, subtracting the detector half size in each direction. >>> >>> What I do not understand is: >>> - In the RTK documentation, it is stated that "The detector position is >>> defined with respect to the source", but the ProjectionTranslationM in RTK >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> - Why the reconstructions aren't working at all. >>> >>> I have enclosed a sample geometry file I generated that provides >>> some acceptable results when used for phantom projection, but provides >>> totally wrong reconstructions when reconstructing my image data with SART >>> (sample image taken from a reconstructed volume).
>>> >>> Thank you in advance for your help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand this sentence well: > "For many 'normal' 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like the 'detector half size'." > The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half the detector size if all the out-of-plane rotations are negligible. > But... > When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that the projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector).
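The relation between the scaled U0,V0 parameters of the intrinsic matrix and the projected position, as discussed in this thread, can be checked with a minimal toy pinhole model. This is an illustrative sketch only; the names (f, u0, v0, sid) and numbers are assumptions, not RTK's API or geometry conventions:

```cpp
#include <array>
#include <cmath>

// Toy pinhole model (illustrative, not RTK code): the source sits at
// distance 'sid' from the isocenter along z; 'f' lumps the source-to-
// detector distance and the pixel scaling; (u0, v0) is the principal
// point, i.e. the scaled U0,V0 of the intrinsic matrix.
struct Pinhole {
  double f, u0, v0, sid;
  // Project a 3D point (isocenter coordinates) to 2D buffer coordinates.
  std::array<double, 2> project(double x, double y, double z) const {
    const double depth = sid + z;  // distance from the source along the axis
    return {f * x / depth + u0, f * y / depth + v0};
  }
};
```

A point on the optical axis lands exactly at (u0, v0), and a point at the isocenter is magnified by f/sid, which is the magnification step mentioned above.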
For a perfect geometry, rtksimulatedgeometry assumes that both the image origin and the detector origin are at the center, so the projection offset is zero. But as I said, in many normal 2D image formats like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is then the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half the detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with regard to the world coordinate system. > This information could help me to determine if my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axes (u and v) are orientated with regard to the detector axes (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image formats, and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned.
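The origin and direction-cosine mapping described above follows the ITK convention physical = origin + Direction * diag(spacing) * index. A minimal stand-alone sketch (illustrative types and helper names, not ITK code):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// ITK-style mapping from a pixel index to a physical point:
// physical = origin + Direction * diag(spacing) * index.
Vec3 indexToPhysical(const Vec3& origin, const Mat3& dir,
                     const Vec3& spacing, const std::array<int, 3>& idx) {
  Vec3 p = origin;
  for (int r = 0; r < 3; ++r)
    for (int c = 0; c < 3; ++c)
      p[r] += dir[r][c] * spacing[c] * idx[c];
  return p;
}

// With the image origin at the first pixel and the detector origin at the
// detector center, the projection offset has the size of half the detector:
// 0.5 * (nPixels - 1) * spacing (the sign depends on the quadrant).
double halfDetectorOffset(int nPixels, double spacing) {
  return 0.5 * (nPixels - 1) * spacing;
}
```

For a 512-pixel detector row with 0.5 mm pixels, the detector center sits 127.75 mm from the first pixel, which is the "half detector size" offset discussed above.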
> > About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I experienced the following behaviour: > > Origin point: > ( -half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) > the coordinate in Z is a bit odd but why not? > Spacing: > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of values I use when importing my images into RTK. > > Thank you for your time and help > > Simon: finding the position of the origin of the detector, and the directions, etc. would require performing the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : >> >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal"
source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the ?ideal? source B. The perpendicular ray AC from the real source A intersects the detector at C differing from D by CD or AB, the source offset, which is the reason that you see the source offset appears again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has x element, otherwise it contains both x and y elements. lastly, the size of projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >>> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the calibration process give a 3*4 projection matrix for each acquisition position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. 
>>> >>> Using the pinhole model, I tried to factorize this matrix as the product of various submatrix, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here at p18 of the pdf. >>> I think that the way I factorized the matrix is correct, and match the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . >>> >>> My problem arise when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrix, each modelling a system coordinate change, I have: >>> - a world coordinate system to source centered system matrix (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows RTK reference. And the translation in z should be replaced by zero, as it correspond to source-isocenter distance, and is taken into accounts in the magnification step. >>> So I think it is easy to find all the rotation angle, and the sid distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the focal (or source detector distance) and the projection offset, from the U0, V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. 
>>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide some acceptable result when used for phantom projection, but provide totally wrong reconstruction when reconstructing my image data with sart (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between siddon and joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modify the code). We have done some similar work in Gate using RTK. This is not public yet but the idea is to implement specific functor for Joseph. You should look at the code and the two TInterpolationWeightMultiplication and TProjectedValueAccumulation template in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I check against my quick ray-tracer-implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but cant > resolve the issue completely. > > I put an example below, for some reason I get signal at the outer > detetectors where there should be none. 
> > Also: Can I somehow keep track of the voxels traversed in your code
> (for dosimetric and simulation applications)?
>
> Example:
>
> double sid = 100, aid = 20;
> int nproj = 1;
> double first_angle = 0, angular_arc = 360;
>
> volume_spacing(1, 1, 1);
> volume_center(0.0, 0.0, 0.0);
> volume_size(3, 3, 3);
>
> projection_center(0.0, 0.0, 0.0);
> projection_size(5, 5, nproj);
> projection_spacing(1, 1, 1.0);
>
> The projections are:
>
> (1) Joseph projector
>
> z: 0
>    0:         1:        2:        3:        4:
> 0: 0.3339816  1.000174  1.000139  1.000174  0.3339816
> 1: 1.000174   3.000208  3.000104  3.000208  1.000174
> 2: 1.000139   3.000104  3         3.000104  1.000139
> 3: 1.000174   3.000208  3.000104  3.000208  1.000174
> 4: 0.3339816  1.000174  1.000139  1.000174  0.3339816
>
> (2) My Raytracer:
>
> z: 0
>    0:  1:        2:        3:        4:
> 0: 0   0         0         0         0
> 1: 0   3.000208  3.000104  3.000208  0
> 2: 0   3.000104  3         3.000104  0
> 3: 0   3.000208  3.000104  3.000208  0
> 4: 0   0         0         0         0
>
> (3) RayBox Integration (from -1.5 to 1.5)
>
> z: 0
>    0:  1:        2:        3:        4:
> 0: 0   0         0         0         0
> 1: 0   3.000208  3.000104  3.000208  0
> 2: 0   3.000104  3         3.000104  0
> 3: 0   3.000208  3.000104  3.000208  0
> 4: 0   0         0         0         0
>
> Values coincide except at the boundary; only at the detector boundary
> is there signal that I don't understand.
>
> Rgds
> Steffen
>
> 2014-12-05 9:46 GMT+01:00, Steffen Lukas :
>> Hi Simon
>>
>> I checked against my quick ray-tracer implementation in Siddon style.
>>
>> I tried the enlarged volume with 0-boundary already before, but can't
>> resolve the issue completely.
>>
>> I put an example below; for some reason I get signal at the outer
>> detectors where there should be none.
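The Siddon-style zeros at the border of the tables above can be reproduced with a simple slab-method ray-box intersection; the nonzero Joseph border values then come from interpolation, as discussed in this thread. This is a sketch (not RTK code), assuming for simplicity rays of unit direction through an axis-aligned box [-h, h]^3:

```cpp
#include <algorithm>
#include <cmath>

// Length of the intersection of a ray with the axis-aligned box [-h,h]^3,
// computed with the slab method. Ray: p(t) = o + t*d, with |d| = 1.
double rayBoxLength(const double o[3], const double d[3], double h) {
  double t0 = -1e30, t1 = 1e30;
  for (int k = 0; k < 3; ++k) {
    if (std::abs(d[k]) < 1e-12) {
      // Ray parallel to this slab: either always inside it or never.
      if (o[k] < -h || o[k] > h) return 0.0;
    } else {
      const double a = (-h - o[k]) / d[k];
      const double b = ( h - o[k]) / d[k];
      t0 = std::max(t0, std::min(a, b));  // latest entry
      t1 = std::min(t1, std::max(a, b));  // earliest exit
    }
  }
  return t1 > t0 ? t1 - t0 : 0.0;  // 0 when the ray misses the box
}
```

For the 3x3x3 unit-spacing volume of the example (h = 1.5), a central ray yields a length of 3.0, matching the center of the ray-tracer table, and a ray passing outside the box yields 0, matching its zero border.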
>> Arne
>>
>> Example:
>>
>> double sid = 100, aid = 20;
>> int nproj = 1;
>> double first_angle = 0, angular_arc = 360;
>>
>> volume_spacing(1, 1, 1);
>> volume_center(0.0, 0.0, 0.0);
>> volume_size(3, 3, 3);
>>
>> projection_center(0.0, 0.0, 0.0);
>> int3 projection_size(5, 5, nproj);
>> vect3 projection_spacing(1, 1, 1.0);
>> matr3 projection_direction = matr3::Identity();
>>
>> 2014-12-04 16:30 GMT+01:00, Simon Rit :
>>> Hi,
>>> Good point. Since we interpolate, we chose the model that you mention. A
>>> simple trick that should work is to add a 0 border around your volume. That
>>> will allow you to compare your results.
>>> Out of curiosity, what's your projector? If it's Siddon, that would make
>>> sense, but I wonder what you do if it's an interpolation model (Joseph,
>>> trilinear, etc.).
>>> Simon
>>>
>>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote:
>>>
>>>> Dear All
>>>>
>>>> I have a question regarding the forward projectors. It seems that at
>>>> the boundary, integration starts at mid-voxel, which makes it difficult
>>>> for me to compare with our own implementation since information is
>>>> partly lost.
>>>>
>>>> Can I somehow set up the projectors such that all (full) voxels are
>>>> integrated?
>>>>
>>>> Thanks a lot
>>>> Arne
>>>> _______________________________________________
>>>> Rtk-users mailing list
>>>> Rtk-users at public.kitware.com
>>>> http://public.kitware.com/mailman/listinfo/rtk-users

From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags (i.e. I can't get any verbose output with '-v').
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call has resolved the issue for me. I'm not sure, however, what the intended use of the second call was (it occurs immediately after: args_params.check_required = 1; which I feel could just be moved above the first call, as it happens regardless), but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest github commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked down the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call has resolved the issue for me.
> I'm not sure, however, what the intended use of the second call was > for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless), but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed, - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now but let me know if you notice something wrong again.
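The parse order described here (scan the command line for a config file, load the config file, then re-read the command line with override disabled so earlier values are kept) can be mimicked with a toy parser. This sketches only the semantics; it is not gengetopt code, and all names are illustrative:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy model of the behaviour described above: values already set by an
// earlier pass must survive a later pass when overriding is disabled.
struct Args {
  std::map<std::string, std::string> values;
  void parse(const std::vector<std::pair<std::string, std::string>>& opts,
             bool override_existing) {
    for (const auto& kv : opts)
      if (override_existing || !values.count(kv.first))
        values[kv.first] = kv.second;  // keep the first value otherwise
  }
};
```

With overriding enabled in the final pass, defaults supplied by that pass would clobber flags read in the first pass, which is the observed "ignored command line flags" symptom.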
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm whether there is filtering in this code and what type of filters there are (ramp, Hann etc.)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not?

// Create reconstructed image
typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType;
ConstantImageSourceType::PointType origin;
ConstantImageSourceType::SpacingType spacing;
ConstantImageSourceType::SizeType sizeOutput;
ConstantImageSourceType::DirectionType direction;
direction.SetIdentity();

sizeOutput[0] = 1890; //1747; //1890; as found in dicom info
sizeOutput[1] = 2457; //as found in dicom info
sizeOutput[2] = 1; //as found in dicom info

double offset(26.27); // Gap between detector and sample
origin[0] = 171.99;
origin[1] = -223/2; //223 is the height of the reconstructed volume
origin[2] = offset+0;

spacing[0] = 0.091;
spacing[1] = 0.091;
spacing[2] = 1;

direction [0][0] = -1;
direction [0][1] = 0;
direction [0][2] = 0;
direction [1][0] = 0;
direction [1][1] = 1;
direction [1][2] = 0;
direction [2][0] = 0;
direction [2][1] = 0;
direction [2][2] = 1;

ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New();

constantImageSource->SetOrigin( origin );
constantImageSource->SetSpacing( spacing );
constantImageSource->SetSize( sizeOutput );
constantImageSource->SetConstant( 0.
);
constantImageSource->SetDirection(direction);

const ImageType::DirectionType& direct = constantImageSource->GetDirection();

std::cout << "Direction3DZeroMatrix= " << std::endl;
std::cout << direct << std::endl;

std::cout << "Performing reconstruction" << std::endl;

//BackProjection reconstruction (no filtering)
typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
typedef rtk::BackProjectionImageFilter< ImageType, ImageType > FDKCPUType;
FDKCPUType::Pointer feldkamp = FDKCPUType::New();
feldkamp->SetInput( 0, constantImageSource->GetOutput() );
feldkamp->SetInput( 1, imageStack);
feldkamp->SetGeometry( baseGeom );
feldkamp->Update();

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter performs filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter and redefines some methods (I think it performs a specific weighting of projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not? 
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout << "Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result).
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory". > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override issue makes sense, and I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset (to null, or default values), and thus thought 'override' meant something else!). Thanks again, glad the info was helpful. 
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occurred in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything works fine now but let me know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. 
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override issue makes sense, and I didn't have time to trace > through gengetopt. I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values), and thus > thought 'override' meant something else!). > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed. 
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything works fine now but let me know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data. 
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets I hope this helps Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations. 
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became. 
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, or 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. 
Using > the CUDA forward and back projectors helped a lot to keep the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> > What parameters and how can I adjust (like alpha, beta, or iterations?) to > improve the ADMMTV reconstruction? There is not much description of this > application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to too high a gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations. 
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noisier compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma to 0.2, 0.5, 1.0, or 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma didn't make a difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, admmwavelets seems to perform better than admmtotalvariation > but neither gave satisfactory results. Not sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. 
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again !). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot to keep > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization). 
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to too high a gamma. Depending > on your data, gamma may have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noisier compared to that using 3 iterations. The contrast is getting worse > as well. 
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, or 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma didn't make a > difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, admmwavelets seems to perform better than admmtotalvariation but > neither gave satisfactory results. Not sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !). 
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot to keep the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample. 
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data.
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command: rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com : > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. > Depending on your data, gamma may have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry.
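[Editor's note] Cyril's remark about i0 can be made concrete with a small sketch (illustrative only, not RTK code; the function name is made up): under the Beer-Lambert model, attenuation is ln(I0/I), so a biased flood-field estimate I0 shifts every air pixel away from zero.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch (not RTK code): Beer-Lambert conversion from a
// measured intensity I to line attenuation, a = ln(I0 / I). Air pixels
// (I == I0) should map to 0; if the flood-field estimate I0 is biased
// by a factor k, every air pixel gets a constant offset ln(k) instead.
double toAttenuation(double I, double I0)
{
    return std::log(I0 / I);
}
```

With a correct I0, air maps exactly to zero; with I0 overestimated by a factor of 2, air comes out at ln(2), i.e. the "much-higher-than-zero attenuation in air" symptom described above.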
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noiser compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma's >> didn't make a difference. >> I hope I didn't misused the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, tdmmwavelets seems perform better than >> tdmmtotalvariation but neither gave satisfactory results. No sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run. 
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot maintain the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results. 
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page. 
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile:+33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile:+33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example. 1) The reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:

p    attenuation
0    0, or log(257)-log(257)
1    log(257)-log(2)
2    log(257)-log(3)
3    log(257)-log(4)
...
254  log(257)-log(255)
255  log(257)-log(256)

My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question.
I can't remember exactly but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border which is probably why I did this since border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like:
> p    attenuation
> 0    0, or log(257)-log(257)
> 1    log(257)-log(2)
> 2    log(257)-log(3)
> 3    log(257)-log(4)
> ...
> 254  log(257)-log(255)
> 255  log(257)-log(256)
> > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task.
The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here at p18 of the pdf. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlaneAngle/OutOfPlaneAngle model described here. My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have:
- a world coordinate system to source-centered system matrix (modeling the Euler 3D rotation and also the translation from isocenter to source)
- a source-centered system to 2D buffer index matrix modeling source-to-detector and pixel size scaling and then detector translation (U0,V0)
As I understand it, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: the extrinsic parameters matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotations follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the sid distance as well. The intrinsic parameters matrix could be decomposed in order to find the focal length (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction.
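[Editor's note] The 3*4 pinhole mapping being factorized above can be exercised with a small sketch (illustrative only, not RTK code; the matrix values below are made up) to check that a candidate factorization reproduces the measured pixel coordinates:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Illustrative sketch of the pinhole mapping discussed above (not RTK
// code): a 3x4 matrix P maps a homogeneous world point [X;1] to
// homogeneous detector coordinates; dividing by the third component
// gives the pixel position (u, v).
using Vec3  = std::array<double, 3>;
using Mat34 = std::array<std::array<double, 4>, 3>;

std::array<double, 2> project(const Mat34& P, const Vec3& X)
{
    double h[3] = {0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        h[i] = P[i][0] * X[0] + P[i][1] * X[1] + P[i][2] * X[2] + P[i][3];
    return {h[0] / h[2], h[1] / h[2]};
}
```

If P is rebuilt as K[R|t] from a candidate factorization (K holding the focal length and U0, V0), projecting known world points through both the original calibration matrix and the rebuilt one should give the same (u, v); a mismatch localizes the factorization error.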
What I do not understand is:
- In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier.
- Why reconstructions aren't working at all
I have enclosed a sample geometry file I generated that provides acceptable results when used for phantom projection, but yields a totally wrong reconstruction when reconstructing my image data with sart (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged into the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK. If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any question and to incorporate suggested changes. Enjoy and thanks in advance for your help!
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input( 0 ). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and ram consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. 
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin of your projections coordinate system at the center of the projections, have you Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, the AddProjection of the image works in degrees, you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix.
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. 
> -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki, we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume so the first input is the same as the output, the projections. I do not see how you could modify this, could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what is the problem but if you could play with buffer pointers, I would try to avoid this if I were you because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions. 
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
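[Editor's note] Simon's description of the in-place behavior can be mimicked with a toy sketch (plain C++, deliberately not ITK; all names are made up) showing why the input and output buffer pointers coincide by default and become distinct after InPlaceOff():

```cpp
#include <cassert>
#include <vector>

// Toy sketch (not ITK code) of the InPlaceImageFilter behavior described
// above: with in-place on (ITK's default), the "output" aliases the first
// input's buffer; after InPlaceOff(), the output gets its own buffer.
struct ToyInPlaceFilter
{
    bool m_InPlace = true;               // analogous to ITK's default
    std::vector<float>* m_Input = nullptr;
    std::vector<float> m_OutputStorage;  // used only when in-place is off

    void SetInput(std::vector<float>* img) { m_Input = img; }
    void InPlaceOff() { m_InPlace = false; }

    // Returns the buffer the output lives in.
    float* Update()
    {
        if (m_InPlace)
            return m_Input->data();      // same buffer as input #0
        m_OutputStorage = *m_Input;      // separate copy of the data
        return m_OutputStorage.data();
    }
};
```

This is why Louie saw GetInput()->GetBufferPointer() and GetOutput()->GetBufferPointer() return the same value: the output is literally the input buffer until in-place execution is turned off.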
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is why you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal"
2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description. Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > .
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All I have a question regarding the forward projectors. It seems that at the boundary integration starts at mid-voxel which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations and o is the origin (n is the dimension of your space). 
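Simon's mapping can be written out concretely. Below is a minimal plain-Python sketch of what itk::Image::TransformIndexToPhysicalPoint computes; the function name and the example values are mine, for illustration only, not ITK API.

```python
# Plain-Python sketch of the mapping Simon describes: x = d*s*v + o,
# where s is the diagonal spacing matrix, d the direction matrix and o
# the origin. This mirrors what ITK's TransformIndexToPhysicalPoint does;
# the helper name and example values here are illustrative, not ITK API.

def index_to_physical(index, origin, spacing, direction):
    """Map an n-D voxel index to physical coordinates."""
    n = len(index)
    scaled = [spacing[i] * index[i] for i in range(n)]          # s*v
    return [sum(direction[r][c] * scaled[c] for c in range(n))  # d*(s*v)
            + origin[r] for r in range(n)]                      # + o

# 2D example: identity direction, 2 mm x 3 mm spacing, origin (-10, -10) mm
x = index_to_physical([1, 1], [-10.0, -10.0], [2.0, 3.0],
                      [[1.0, 0.0], [0.0, 1.0]])
# x == [-8.0, -7.0]
```

With a non-identity direction matrix the same formula applies, which is why the Direction of the image matters even though it defaults to identity.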
I don't know if / where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct but it's something you should probably know about. I'm a bit lost in your geometric descriptions but that should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allow to change from a world coordinate system, whose origin > correspond to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, that correspond to the orthogonal > projection of the focal point. > > I am aware of that fact, this I why, I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that correspond to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector, is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinate system is used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projections coordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer, If I understand well this sentence: *"For many "normal" 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size"."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. But... When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appears that projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzled me, is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) regarding the world coordinate system. This information could help me to determine if my projectionOffset should be negative or positive. About the images geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I experienced the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd but why not ? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk. 
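The centred origin/spacing pattern discussed here can be condensed into a small sketch. This is my own illustration (the helper name and the 512-pixel, 0.8 mm panel are assumed), following the ITK convention quoted earlier in the thread that the origin is the coordinate of the centre of the first pixel.

```python
# Sketch (my own helper, not an RTK API): origin and spacing of a
# projection image centred on the detector. With N pixels of size p along
# an axis, the centre of the first pixel lies at -(N - 1) * p / 2 from
# the detector centre; z is set to 0 since RTK does not use it.

def centred_projection_geometry(n_pixels_u, n_pixels_v, pixel_size_mm):
    origin = (-(n_pixels_u - 1) * pixel_size_mm / 2.0,
              -(n_pixels_v - 1) * pixel_size_mm / 2.0,
              0.0)
    spacing = (pixel_size_mm, pixel_size_mm, 1.0)
    return origin, spacing

# Assumed 512 x 512 panel with 0.8 mm pixels:
origin, spacing = centred_projection_geometry(512, 512, 0.8)
# origin is roughly (-204.4, -204.4, 0.0) mm, spacing (0.8, 0.8, 1.0)
```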
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives an common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an "ideal" source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the "ideal" source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
Lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many "normal" 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like "detector half size". > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>> >> Thank you in advance for you help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection so in this case, the source, center-of-rotation and projection (0,0) points are aligned and offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix but it's up to you. For example, the directions of the vector of the projection axes are the lines of your projection matrix if I'm not mistaking. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, would you be able to provide a dataset to get some help, that would be much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many ?normal? 
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
lastly, the size of projection >> offset is the distance between the origin of the projection image and the >> origin of the detector (point D). For many ?normal? 2D image format the >> origin of the image is just at the first pixel (one corner), so the size of >> the projection offset is just the distance from the corner to D and has >> nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK >> (causing shifts of source and detector origin, and changes of sid and sdd, >> etc. compared with the point of view of the Fig 1 style), although this >> angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?." > The projection offset correspond exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. > But... > When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appear that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both image origin and detector origin are at the center so the projection offset is zero. But as I said, in many normal 2D image formats like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is then the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > An other aspect that puzzled my, is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axes (u and v) are orientated with regard to the detector axes (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image formats and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned. 
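Chao's rule for the size and sign of the projection offset can be condensed into a short sketch. The helper name and the panel dimensions below are my own assumptions for illustration, not RTK API.

```python
# Sketch of Chao's rule: the projection offset is the vector from the
# detector origin (orthogonal projection of the isocenter onto the
# detector) to the image origin (the first pixel for formats like .png).
# Its sign follows the quadrant in which the first pixel sits.

def projection_offset(image_origin_mm, detector_origin_mm):
    return tuple(i - d for i, d in zip(image_origin_mm, detector_origin_mm))

# Assumed 512 x 512 panel with 0.8 mm pixels, detector origin at the
# panel centre, image origin at the centre of the corner pixel:
half = (512 - 1) * 0.8 / 2.0
offset = projection_offset((-half, -half), (0.0, 0.0))
# about half the detector size per axis, negative because the first
# pixel sits in the negative quadrant of the detector coordinate system
```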
> > About the images geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and whithout surprise I experienced the following behaviour: > > Origin point: > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > Spacing > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : >> >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about z axis, the out-of-plane rotation defined in RTK is about x axis, and there should be another out-of-plane rotation about y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives an common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about y can be in fact merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than that in Fig 1 to represent this detector change, as shown in Fig 2: an ?ideal? 
source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the ?ideal? source B. The perpendicular ray AC from the real source A intersects the detector at C differing from D by CD or AB, the source offset, which is the reason that you see the source offset appears again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has x element, otherwise it contains both x and y elements. lastly, the size of projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >>> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the calibration process give a 3*4 projection matrix for each acquisition position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. 
>>> >>> Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used can be found here at p18 of the pdf. >>> I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . >>> >>> My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: >>> - a world coordinate system to source centered system matrix (modeling the Euler 3D rotation and also the translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling the source to detector distance and pixel size scaling, and then the detector translation (U0,V0) >>> >>> As I understand it, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: >>> The extrinsic parameter matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotations follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance and is taken into account in the magnification step. >>> So I think it is easy to find all the rotation angles, and the sid distance as well. >>> >>> The intrinsic parameter matrix could be decomposed in order to find the focal (or source detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier.
>>> -Why reconstructions aren't working at all >>> >>> I enclosed a sample geometry file I have generated that provides some acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with sart (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for your help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between Siddon and Joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modifying the code). We have done some similar work in Gate using RTK. This is not public yet but the idea is to implement specific functors for Joseph. You should look at the code and at the two TInterpolationWeightMultiplication and TProjectedValueAccumulation templates in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I checked against my quick ray-tracer implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but can't > resolve the issue completely. > > I put an example below; for some reason I get signal at the outer > detectors where there should be none.
> > Also: Can I somehow keep track of the voxels traversed in your code > (for dosimetric and simulation applications)? > > > > > > Example: > > > double sid = 100, aid = 20; > int nproj = 1; > double first_angle = 0, angular_arc = 360; > > volume_spacing(1, 1, 1); > volume_center(0.0, 0.0, 0.0); > volume_size(3, 3, 3); > > projection_center(0.0, 0.0, 0.0); > projection_size(5, 5, nproj); > projection_spacing(1, 1, 1.0); > > > The projections are: > > (1) Joseph projector > > z: 0 > 0: 1: 2: 3: 4: > 0: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > 1: 1.000174 3.000208 3.000104 3.000208 1.000174 > 2: 1.000139 3.000104 3 3.000104 1.000139 > 3: 1.000174 3.000208 3.000104 3.000208 1.000174 > 4: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > > > (2) My Raytracer: > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > (3) RayBox Integration (from -1.5 to 1.5) > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > Values except at the boundary coincide; only at the detector boundary > is there signal that I don't understand. > > Rgds > Steffen > > > > 2014-12-05 9:46 GMT+01:00, Steffen Lukas : >> Hi Simon >> >> I checked against my quick ray-tracer implementation in Siddon style. >> >> I tried the enlarged volume with 0-boundary already before, but can't >> resolve the issue completely. >> >> I put an example below; for some reason I get signal at the outer >> detectors where there should be none.
>> Arne >> >> >> >> Example: >> >> >> double sid = 100, aid = 20; >> int nproj = 1; >> double first_angle = 0, angular_arc = 360; >> >> volume_spacing(1, 1, 1); >> volume_center(0.0, 0.0, 0.0); >> volume_size(3, 3, 3); >> >> projection_center(0.0, 0.0, 0.0); >> int3 projection_size(5, 5, nproj); >> vect3 projection_spacing(1, 1, 1.0); >> matr3 projection_direction = matr3::Identity(); >> >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. >>> That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary, integration starts at mid-voxel, which makes it difficult >>>> for me to compare with our own implementation since information is >>>> partly lost. >>>> >>>> Can I somehow set up the projectors such that all (full) voxels are >>>> integrated? >>>> >>>> Thanks a lot >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags (i.e. I can't get any verbose output with '-v').
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call has resolved the issue for me. I'm not sure, however, what the intended use of the second call was (it occurs immediately after: args_params.check_required = 1; which I feel could just be moved above the first call, as it happens regardless), but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest GitHub commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call has resolved the issue for me.
> I'm not sure, however, what the intended use of the second call was > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless), but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest GitHub commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed. - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now but let me know if you notice something wrong again.
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm if there is filtering in this code and what type of filters there are (ramp, Hann etc)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not? // Create reconstructed image typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType; ConstantImageSourceType::PointType origin; ConstantImageSourceType::SpacingType spacing; ConstantImageSourceType::SizeType sizeOutput; ConstantImageSourceType::DirectionType direction; direction.SetIdentity(); sizeOutput[0] = 1890; //1747; //1890; as found in dicom info sizeOutput[1] = 2457; //as found in dicom info sizeOutput[2] = 1; //as found in dicom info double offset(26.27); // Gap between detector and sample origin[0] = 171.99; origin[1] = -223/2; //223 is the height of the reconstructed volume origin[2] = offset+0; spacing[0] = 0.091; spacing[1] = 0.091; spacing[2] = 1; direction [0][0] = -1; direction [0][1] = 0; direction [0][2] = 0; direction [1][0] = 0; direction [1][1] = 1; direction [1][2] = 0; direction [2][0] = 0; direction [2][1] = 0; direction [2][2] = 1; ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New(); constantImageSource->SetOrigin( origin ); constantImageSource->SetSpacing( spacing ); constantImageSource->SetSize( sizeOutput ); constantImageSource->SetConstant( 0.
); constantImageSource->SetDirection(direction); const ImageType::DirectionType& direct = constantImageSource->GetDirection(); std::cout <<"Direction3DZeroMatrix= " << std::endl; std::cout << direct << std::endl; std::cout << "Performing reconstruction" << std::endl; //BackProjection reconstruction (no filtering) typedef rtk::ProjectionGeometry<3> ProjectionGeometry; ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> FDKCPUType; FDKCPUType::Pointer feldkamp = FDKCPUType::New(); feldkamp->SetInput( 0, constantImageSource->GetOutput() ); feldkamp->SetInput( 1, imageStack); feldkamp->SetGeometry( baseGeom ); feldkamp->Update(); -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter performs filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, and redefines some methods (I think it performs a specific weighting of projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA or OpenCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not?
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result).
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory". > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0.
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override issue makes sense, and I didn't have time to trace through gengetopt.) I thought I was missing something, as none of the non-flag arguments were being reset (to null, or default values), and thus thought 'override' meant something else! Thanks again, glad the info was helpful.
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occured in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. 
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for you reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override issue makes sense, and I didn't have time to trace > through gengetopt. I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values, and thus > thought 'override' meant something else!). > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occured in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed. 
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call, has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data. 
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets I hope this helps Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations. 
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became. 
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma's didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. 
Using > the CUDA forward and back projectors helped a lot maintain the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> > What parameters and how can I adjust (like alpha, beta, or iterations?) to > improve the ADMMTV reconstruction? There is not much description of this > application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to too high a gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look ? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations. 
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noiser compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma's didn't make a difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, tdmmwavelets seems perform better than tdmmtotalvariation > but neither gave satisfactory results. No sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. 
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again !). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot maintain > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization). 
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile:+33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. Depending > on you data, gamma can have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noiser compared to that using 3 iterations. The contrast is getting worse > as well. 
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma's didn't make a > difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, tdmmwavelets seems perform better than tdmmtotalvariation but > neither gave satisfactory results. No sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !). 
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot maintain the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample. 
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data. 
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command : rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com : > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. > Depending on you data, gamma can have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry. 
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noiser compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma's >> didn't make a difference. >> I hope I didn't misused the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, tdmmwavelets seems perform better than >> tdmmtotalvariation but neither gave satisfactory results. No sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run. 
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot maintain the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results. 
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page. 
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile:+33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile:+33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example. 1) The reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:
p attenuation
0 0, or log(257)-log(257)
1 log(257)-log(2)
2 log(257)-log(3)
3 log(257)-log(4)
...
254 log(257)-log(255)
255 log(257)-log(256)
My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question. 
I can't remember exactly, but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border, which is probably why I did this, since the border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task.
The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here, at p. 18 of the PDF. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here. My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: - a world coordinate system to source-centered system matrix (modelling the Euler 3D rotation and also the translation from isocenter to source) - a source-centered system to 2D buffer index matrix modelling the source-to-detector and pixel size scaling and then the detector translation (U0,V0) As I understand it, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: the extrinsic parameters matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotations follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the sid distance as well. The intrinsic parameters matrix can be decomposed in order to find the focal length (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction.
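For concreteness, the factorization described here (3*4 matrix = intrinsics times extrinsics) can be sketched with an RQ decomposition in numpy. This is only an illustrative sketch of the standard pinhole decomposition; the function names and sign conventions are mine, not RTK's:

```python
import numpy as np

def rq(M):
    """RQ decomposition of a 3x3 matrix: M = U @ Q with U upper
    triangular and Q orthogonal, built from numpy's QR on a flipped copy."""
    P = np.flipud(np.eye(3))            # permutation reversing row order
    q, r = np.linalg.qr((P @ M).T)
    U = P @ r.T @ P                     # upper-triangular factor
    Q = P @ q.T                         # orthogonal factor
    S = np.diag(np.sign(np.diag(U)))    # flip signs so diag(U) > 0
    return U @ S, S @ Q

def decompose_projection(P34):
    """Factor a 3x4 pinhole matrix P34 ~ K @ [R | t] into intrinsics K,
    rotation R and translation t (illustrative, not RTK's own code)."""
    M, p4 = P34[:, :3], P34[:, 3]
    K, R = rq(M)
    scale = K[2, 2]                     # remove the arbitrary global scale
    K = K / scale
    t = np.linalg.solve(K, p4 / scale)
    return K, R, t

# Round-trip check on a synthetic geometry
K0 = np.array([[1500.0,    0.0, 256.0],
               [   0.0, 1500.0, 256.0],
               [   0.0,    0.0,   1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R0 = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # rotation about y
t0 = np.array([10.0, -5.0, 1000.0])
K1, R1, t1 = decompose_projection(K0 @ np.hstack([R0, t0[:, None]]))
```

On synthetic data this round-trips exactly; with a real calibration matrix one may additionally want to check that det(R) = +1 and fix the overall sign of the matrix before decomposing.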
What I do not understand is: -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. -Why reconstructions aren't working at all I have enclosed a sample geometry file I generated that provides acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with sart (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged into the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK. If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any question and to incorporate suggested changes. Enjoy, and thanks in advance for your help!
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input( 0 ). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and ram consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. 
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices? RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing and the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig 1.1 shows an origin of your projections' coordinate system at the center of the projections; have you accounted for this? Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, AddProjection works in degrees; you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix.
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. 
> -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki; we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume, so the first input is the same as the output, the projections. I do not see how you could modify this; could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but if you can play with buffer pointers, I would try to avoid it if I were you, because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions.
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hoi Thibault, The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However, RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element; otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal"
2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description? Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > .
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All, I have a question regarding the forward projectors. It seems that at the boundary, integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation, since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations, and o is the origin (n is the dimension of your space).
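The mapping Simon describes can be written out in a few lines of numpy; a minimal sketch (the helper name is mine — in ITK itself this is image->TransformIndexToPhysicalPoint(index)):

```python
import numpy as np

def index_to_physical(index, origin, spacing, direction):
    """x = d @ s @ v + o: scale the index by the spacing, rotate by the
    direction matrix, then offset by the origin (per Simon's formula)."""
    v = np.asarray(index, dtype=float)
    d = np.asarray(direction, dtype=float)
    return d @ (np.diag(spacing) @ v) + np.asarray(origin, dtype=float)

# Example: identity direction, 2 mm in-plane spacing, origin at (-100, -100, 0)
x = index_to_physical([3, 4, 5], [-100.0, -100.0, 0.0], [2.0, 2.0, 1.0], np.eye(3))
# -> [-94., -92., 5.]
```

Getting origin, spacing and direction right in this formula is exactly what makes the 3*4 matrix import consistent with RTK's geometry.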
I don't know if / where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct but it's something you should probably know about. I'm a bit lost in your geometric descriptions but that should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allow to change from a world coordinate system, whose origin > correspond to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, that correspond to the orthogonal > projection of the focal point. > > I am aware of that fact, this I why, I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that correspond to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector, is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer. If I understand well this sentence: *"For many 'normal' 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like 'detector half size'."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to the half detector size if all the out-of-plane rotations are negligible. But... When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that the projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzles me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with regard to the world coordinate system. This information could help me determine whether my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd, but why not? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk.
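One way to sanity-check origin values like the ones reported here: under the common "centered image" convention, the origin of an axis with N pixels of spacing s is -(N-1)s/2, which puts the image center at physical coordinate 0. A small sketch (the helper is mine, and the convention is an assumption — RTK tools may center differently, as the values above suggest):

```python
import numpy as np

def centered_origin(size_px, spacing_mm):
    """Origin (physical coordinate of the FIRST pixel's center) that puts
    the image center at physical 0 -- the usual 'centered image' rule
    (my helper; not necessarily what RTK's own tools emit)."""
    size = np.asarray(size_px, dtype=float)
    spacing = np.asarray(spacing_mm, dtype=float)
    return -(size - 1.0) * spacing / 2.0

# A 512x512 projection with 0.5 mm pixels, one slice deep
o = centered_origin([512, 512, 1], [0.5, 0.5, 1.0])
# -> [-127.75, -127.75, 0.]
```

Comparing a tool's reported origin against this value quickly shows whether it uses a centered convention or something else.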
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives an common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an ?ideal? source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the ?ideal? source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and the offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix but it's up to you. For example, the directions of the vectors of the projection axes are the lines of your projection matrix if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, if you were able to provide a dataset, it would be much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many 'normal' 
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
lastly, the size of projection >> offset is the distance between the origin of the projection image and the >> origin of the detector (point D). For many ?normal? 2D image format the >> origin of the image is just at the first pixel (one corner), so the size of >> the projection offset is just the distance from the corner to D and has >> nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK >> (causing shifts of source and detector origin, and changes of sid and sdd, >> etc. compared with the point of view of the Fig 1 style), although this >> angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?." > The projection offset correspond exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. > But... > When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appear that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both image origin and detector origin are at the center so the projection offset is zero. But as I said, in many normal 2D image format like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is then the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > An other aspect that puzzled my, is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axis (u and v) orientated with regard to the detector axis (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image format and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned. 
> > About the images geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and whithout surprise I experienced the following behaviour: > > Origin point: > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > Spacing > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : >> >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about z axis, the out-of-plane rotation defined in RTK is about x axis, and there should be another out-of-plane rotation about y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives an common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about y can be in fact merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than that in Fig 1 to represent this detector change, as shown in Fig 2: an ?ideal? 
source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the ?ideal? source B. The perpendicular ray AC from the real source A intersects the detector at C differing from D by CD or AB, the source offset, which is the reason that you see the source offset appears again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has x element, otherwise it contains both x and y elements. lastly, the size of projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >>> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the calibration process give a 3*4 projection matrix for each acquisition position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. 
>>> >>> Using the pinhole model, I tried to factorize this matrix as the product of various submatrix, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here at p18 of the pdf. >>> I think that the way I factorized the matrix is correct, and match the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . >>> >>> My problem arise when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrix, each modelling a system coordinate change, I have: >>> - a world coordinate system to source centered system matrix (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows RTK reference. And the translation in z should be replaced by zero, as it correspond to source-isocenter distance, and is taken into accounts in the magnification step. >>> So I think it is easy to find all the rotation angle, and the sid distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the focal (or source detector distance) and the projection offset, from the U0, V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. 
>>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide some acceptable result when used for phantom projection, but provide totally wrong reconstruction when reconstructing my image data with sart (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between siddon and joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modify the code). We have done some similar work in Gate using RTK. This is not public yet but the idea is to implement specific functor for Joseph. You should look at the code and the two TInterpolationWeightMultiplication and TProjectedValueAccumulation template in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I check against my quick ray-tracer-implementation in Siddon style. > > I tried the enlarged volume with 0-boundary already before, but cant > resolve the issue completely. > > I put an example below, for some reason I get signal at the outer > detetectors where there should be none. 
> > Also: Can I somehow keep track of the voxel traversed in your code > (for dosimetric and simulation applications). > > > > > > Example: > > > double sid = 100, aid = 20; > int nproj = 1; > double first_angle = 0, angular_arc = 360; > > volume_spacing(1, 1, 1); > volume_center(0.0, 0.0, 0.0); > volume_size(3, 3, 3); > > projection_center(0.0, 0.0, 0.0); > projection_size(5, 5, nproj); > projection_spacing(1, 1, 1.0); > > > The projections are: > > (1) Joseph projector > > z: 0 > 0: 1: 2: 3: 4: > 0: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > 1: 1.000174 3.000208 3.000104 3.000208 1.000174 > 2: 1.000139 3.000104 3 3.000104 1.000139 > 3: 1.000174 3.000208 3.000104 3.000208 1.000174 > 4: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > > > (2) My Raytracer: > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > (3) RayBox Integration (fom -1.5 to 1.5) > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > Value except at the boundary coincide, only at the detector boundary > there is signal that I dont understand > > Rgds > Steffen > > > > 2014-12-05 9:46 GMT+01:00, Steffen Lukas : >> Hi Simon >> >> I check against my quick ray-tracer-implementation in Siddon style. >> >> I tried the enlarged volume with 0-boundary already before, but cant >> resolve the issue completely. >> >> I put an example below, for some reason I get signal at the outer >> detetectors where there should be none. >> >> Also: Can I somehow keep track of the voxel traversed in your code >> (for dosimetric and simulation applications). 
>> >> Arne >> >> >> >> Example: >> >> >> double sid = 100, aid = 20; >> int nproj = 1; >> double first_angle = 0, angular_arc = 360; >> >> volume_spacing(1, 1, 1); >> volume_center(0.0, 0.0, 0.0); >> volume_size(3, 3, 3); >> >> projection_center(0.0, 0.0, 0.0); >> int3 projection_size(5, 5, nproj); >> vect3 projection_spacing(1, 1, 1.0); >> matr3 projection_direction = matr3::Identity(); >> >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. >>> That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary integration starts at mid-voxel which makes it difficult >>>> for me to compare with our own implemention since information is >>>> partly lost. >>>> >>>> Can I somehow setup the projectors such that all (full) voxel are >>>> integrated? >>>> >>>> Thanks a lost >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags. (i.e. I can't get any verbose output with '-v'). 
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call, has resolved the issue for me. I'm not sure, however, what the intended use of the second call was for (it occurs immediately after: args_params.check_required = 1; which I feel could just be moved above the first call, as it happens regardless, but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest github commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. 
> I'm not sure, however, what the intended use of the second call was > for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed. - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now but let me know if you notice something wrong again. 
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm if there is filtering in this code and what type of filters there are (ramp, Hann etc)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not? // Create reconstructed image typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType; ConstantImageSourceType::PointType origin; ConstantImageSourceType::SpacingType spacing; ConstantImageSourceType::SizeType sizeOutput; ConstantImageSourceType::DirectionType direction; direction.SetIdentity(); sizeOutput[0] = 1890; //1747; //1890; as found in dicom info sizeOutput[1] = 2457; //as found in dicom info sizeOutput[2] = 1; //as found in dicom info double offset(26.27); // Gap between detector and sample origin[0] = 171.99; origin[1] = -223/2; //223 is the height of the reconstructed volume origin[2] = offset+0; spacing[0] = 0.091; spacing[1] = 0.091; spacing[2] = 1; direction [0][0] = -1; direction [0][1] = 0; direction [0][2] = 0; direction [1][0] = 0; direction [1][1] = 1; direction [1][2] = 0; direction [2][0] = 0; direction [2][1] = 0; direction [2][2] = 1; ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New(); constantImageSource->SetOrigin( origin ); constantImageSource->SetSpacing( spacing ); constantImageSource->SetSize( sizeOutput ); constantImageSource->SetConstant( 0.
); constantImageSource->SetDirection(direction); const ImageType::DirectionType& direct = constantImageSource->GetDirection(); std::cout <<"Direction3DZeroMatrix= " << std::endl; std::cout << direct << std::endl; std::cout << "Performing reconstruction" << std::endl; //BackProjection reconstruction (no filtering) typedef rtk::ProjectionGeometry<3> ProjectionGeometry; ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> FDKCPUType; FDKCPUType::Pointer feldkamp = FDKCPUType::New(); feldkamp->SetInput( 0, constantImageSource->GetOutput() ); feldkamp->SetInput( 1, imageStack); feldkamp->SetGeometry( baseGeom ); feldkamp->Update(); -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, and redefines some methods (I think it performs a specific weighting of projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA or OPENCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not?
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result).
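[A side note for readers hitting the same half-height shift: in the ITK convention that RTK follows, the origin is the physical coordinate of the center of the first voxel, so a volume centered on y = 0 needs origin[1] = -spacing[1] * (size[1] - 1) / 2. The sketch below plugs in the y-axis numbers from the code posted in this thread; the centering formula is the standard ITK one, and the last lines point out a separate C++ integer-division pitfall in the posted snippet.]

```python
# y-axis numbers from the posted code: 2457 voxels at 0.091 mm spacing
size_y = 2457
spacing_y = 0.091

extent_y = size_y * spacing_y  # physical height of the volume, ~223.6 mm

# ITK convention: the origin is the center of voxel (0, 0, 0), so a
# volume centered on y = 0 needs the first-voxel center at:
centered_origin_y = -spacing_y * (size_y - 1) / 2  # ~-111.75 mm

print(extent_y, centered_origin_y)

# Separate pitfall in the posted C++: origin[1] = -223/2 is *integer*
# division, which truncates toward zero and assigns -111.0, not -111.5.
cpp_origin_y = int(-223 / 2)  # mimics the C++ truncation
```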
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory". > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0.
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > _______________________________________________ > Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override issue makes sense, and I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset (to null, or default values), and thus thought 'override' meant something else! Thanks again, glad the info was helpful.
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occured in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. 
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override issue makes sense, and I didn't have time to trace > through gengetopt. I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values), and thus > thought 'override' meant something else! > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed.
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call, has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data. 
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets I hope this helps Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations.
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became.
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gammas at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gammas didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations.
Using > the CUDA forward and back projectors helped a lot maintain the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> > What parameters and how can I adjust (like alpha, beta, or iterations?) to > improve the ADMMTV reconstruction? There is not much description of this > application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue La?nnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to a too high gamma. Depending on you data, gamma can have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look ? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations. 
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noisier compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled, but apparently more iterations gave worse results. > 2. Similarly, I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test whether TV gains us anything, using the > tvdenoising application on the fdk image reconstructed > from the full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low, so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma to 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma didn't make a difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, admmwavelets seems to perform better than admmtotalvariation, > but neither gave satisfactory results. Not sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English.
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry!). > Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again!). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will have on the images. I would keep it > as is, and play with alpha > - 3 iterations is way too few. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot to keep > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization).
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with a sparse >> data sample. I could reconstruct, but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed...
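[Editor's note] Cyril's recipe above (run rtktotalvariationdenoising on an FDK volume, around 50 iterations, with gamma setting the amount of regularization) can be illustrated with a toy 1-D total-variation denoiser. This is a minimal sketch of my own in plain Python, gradient descent on a smoothed ROF objective; it is not RTK's implementation and all names are invented:

```python
import math

def tv_denoise_1d(f, gamma, n_iter=3000, tau=0.01, eps=1e-4):
    """Gradient descent on the smoothed ROF objective
    0.5 * sum((u - f)**2) + gamma * sum(sqrt((u[i+1] - u[i])**2 + eps)).
    gamma plays the same role as --gamma in rtktotalvariationdenoising:
    the higher, the flatter the result."""
    u = list(f)
    n = len(u)
    for _ in range(n_iter):
        # normalized slopes of the current estimate (eps keeps them differentiable)
        g = [(u[i + 1] - u[i]) / math.sqrt((u[i + 1] - u[i]) ** 2 + eps)
             for i in range(n - 1)]
        grad = []
        for i in range(n):
            left = g[i - 1] if i > 0 else 0.0
            right = g[i] if i < n - 1 else 0.0
            # data-fidelity pull towards f plus the TV subgradient
            grad.append((u[i] - f[i]) + gamma * (left - right))
        u = [u[i] - tau * grad[i] for i in range(n)]
    return u

def total_variation(u):
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

# a noisy step edge: oscillating values around 0, then around 1
noisy = [0.1 * (-1) ** i for i in range(10)] + [1.0 + 0.1 * (-1) ** i for i in range(10)]
denoised = tv_denoise_1d(noisy, gamma=0.2)
# the oscillations are flattened while the 0 -> 1 edge survives
```

Pushing gamma much higher also eats into the edge contrast itself, which is consistent with the low-contrast result Howard describes for heavy denoising.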
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. Depending > on you data, gamma can have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noiser compared to that using 3 iterations. The contrast is getting worse > as well. 
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma's didn't make a > difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, tdmmwavelets seems perform better than tdmmtotalvariation but > neither gave satisfactory results. No sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !). 
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot maintain the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample. 
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue La?nnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue La?nnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data. 
You can apply TV denoising on the out.mha volume and obtain a significantly lower level of noise, without blurring structures, by using the following command: rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of I0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com: > one is the sparse projection set with the geometry file, and the other is > the fdk reconstructed image based on the full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to too high a gamma. > Depending on your data, gamma may have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry.
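[Editor's note] Cyril's I0 remark refers to the Beer-Lambert conversion p = ln(I0/I): if I0 is estimated too low, air pixels keep a positive attenuation offset that iterative reconstruction then tries to explain. A minimal sketch of the conversion in plain Python; taking the brightest pixel as the air level is a crude assumption of mine, a flat-field (air) acquisition is the proper source for I0:

```python
import math

def to_attenuation(intensity, i0):
    """Beer-Lambert conversion of measured intensities to line integrals:
    p = ln(I0 / I)."""
    return [math.log(i0 / i) for i in intensity]

# one detector row; the bright pixels see no object (air)
row = [980.0, 1001.0, 995.0, 400.0, 120.0, 60.0, 410.0, 990.0]
i0 = max(row)  # crude air-level estimate from the brightest pixel
attenuation = to_attenuation(row, i0)
# air pixels now sit at (near-)zero attenuation; the object stays positive
```

With a too-small i0, every entry of `attenuation` would be shifted up by a constant, which is exactly the "much-higher-than-zero attenuation in air" symptom described above.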
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noiser compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma's >> didn't make a difference. >> I hope I didn't misused the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, tdmmwavelets seems perform better than >> tdmmtotalvariation but neither gave satisfactory results. No sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run. 
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot maintain the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results. 
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page. 
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8-bit unsigned integer tiff pixels as an example: 1) the reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:

p     attenuation
0     0, or log(257)-log(257)
1     log(257)-log(2)
2     log(257)-log(3)
3     log(257)-log(4)
...
254   log(257)-log(255)
255   log(257)-log(256)

My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question.
I can't remember exactly but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border which is probably why I did this since border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From ghostcz at hotmail.com Tue Dec 2 16:21:47 2014 From: ghostcz at hotmail.com (louie L) Date: Tue, 2 Dec 2014 22:21:47 +0100 Subject: [Rtk-users] Input and output image buffer Message-ID: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. 
However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie From simon.rit at creatis.insa-lyon.fr Wed Dec 3 03:31:28 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 09:31:28 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > Dear RTK users and developers, > > I am writing a backprojection filter whose superclass is > ImageToImageFilter. After allocating the output, I called > this->GetInput()->GetBufferPointer() and > this->GetOutput()->GetBufferPointer(). > to get the address of the images in memory. However the two functions > above return the same value. Why? If this is not the correct way to get the > address of the input image, how can I get that address? > Thank you. > > Best regards, > Louie > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues whith this task. 
The system projection matrix is transparently calibrated, and the calibration process give a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrix, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used could be find here at p18 of the pdf. I think that the way I factorized the matrix is correct, and match the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . My problem arise when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrix, each modelling a system coordinate change, I have: - a world coordinate system to source centered system matrix (modeling euler 3D rotation and also translation from isocenter to source) - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: Extrinsinc parameters matrix correspond to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows RTK reference. And the translation in z should be replaced by zero, as it correspond to source-isocenter distance, and is taken into accounts in the magnification step. So I think it is easy to find all the rotation angle, and the sid distance as well Intrinsics parameters matrix could be decomposed in order to find the focal (or source detector distance) and the projection offset, from the U0, V0 parameters, substracting the detector half size in each direction. 
What I do not understand is: -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. -Why reconstruction aren't working at all I enclosed you a sample of geometry file I have generated that provide some acceptable result when used for phantom projection, but provide totally wrong reconstruction when reconstructing my image data with sart (sample image taken from a reconstructed volume). Thank you in advance for you help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged in the master branch of the public repository our developpements for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK . If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please, don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any question and to incorporate suggested changes. Enjoy and thanks in advance for your help! 
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input( 0 ). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and ram consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then , the buffers will be indeed pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer(). to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. 
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinates system are used by your 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin of your projectionscoordinate system at the center of the projections, have you Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, the AddProjection of the image works in degrees, you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. 
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. 
> -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki; we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume, so the first input is the same as the output, the projections. I do not see how you could modify this; could you give an example of why you would want to do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but if you can play with buffer pointers, I would avoid it if I were you, because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions. 
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input( 0 ). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and ram consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then , the buffers will be indeed pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer(). >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hoi Thibault, Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C, differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 
2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description? Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . 
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All I have a question regarding the forward projectors. It seems that at the boundary, integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations, and o is the origin (n is the dimension of your space). 
I don't know if / where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct but it's something you should probably know about. I'm a bit lost in your geometric descriptions but that should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allow to change from a world coordinate system, whose origin > correspond to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, that correspond to the orthogonal > projection of the focal point. > > I am aware of that fact, this I why, I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that correspond to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector, is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you. 
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer, If I understand well this sentence: *"For many "normal" 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size"."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half the detector size if all the out-of-plane rotations are negligible. But... When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzles me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with respect to the world coordinate system. This information could help me to determine whether my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd but why not ? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk. 
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives an common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an ?ideal? source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the ?ideal? source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>> >> Thank you in advance for you help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and the offsets are 0. The Z coordinate of the origin of the projection stack is not used and is irrelevant. Your observation that it is odd is correct, but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix, but it's up to you. For example, the directions of the vectors of the projection axes are the lines of your projection matrix if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, if you could provide a dataset, that would make it much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many ?normal? 
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
>> Lastly, the size of the projection offset is the distance between the
>> origin of the projection image and the origin of the detector (point D).
>> For many "normal" 2D image formats the origin of the image is just at the
>> first pixel (one corner), so the size of the projection offset is just the
>> distance from the corner to D and has nothing to do with things like
>> "detector half size".
>>
>> In fact the out-of-plane rotation about x has a similar effect in RTK
>> (causing shifts of the source and detector origins, and changes of sid and
>> sdd, etc. compared with the point of view of the Fig 1 style), although
>> this angle itself is also needed for rotating the world coordinates.
>>
>> I hope I did not make any mistake in this long description...
>>
>> Regards,
>> Chao
>>
>>
>> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault :
>>
>>> Dear all,
>>>
>>> I am currently trying to import data generated with a custom tomographic
>>> system into RTK, and I am facing issues with this task.
>>>
>>> The system projection matrix is transparently calibrated, and the
>>> calibration process gives a 3*4 projection matrix for each acquisition
>>> position. Each calibration matrix is a direct 3D world to 2D buffer index
>>> matrix.
>>>
>>> Using the pinhole model, I tried to factorize this matrix as the product
>>> of various submatrices, including a 3D centered Euler transform, using
>>> this note as stated in rtkReg23Geometry.cxx.
>>> The pinhole camera model I used can be found here, at p. 18 of the pdf.
>>> I think that the way I factorized the matrix is correct, and matches the
>>> GantryAngle/InPlaneAngle/OutOfPlaneAngle model described here.
>>>
>>> My problem arises when I try to model the x/z tilt of the detector: when
>>> decomposing my projection matrix into different matrices, each modelling
>>> a system coordinate change, I have:
>>> - a world coordinate system to source-centered system matrix
>>> (modelling the Euler 3D rotation and also the translation from isocenter
>>> to source)
>>> - a source-centered system to 2D buffer index matrix modelling source
>>> to detector and pixel size scaling and then detector translation (U0,V0)
>>>
>>> As I understand it, the pinhole model should allow a perfect fit with
>>> the RTK geometry model in the following sense:
>>> The extrinsic parameters matrix corresponds to the SourceTranslationM
>>> and RotationM in RTK, assuming that the order of the rotations follows
>>> the RTK reference. And the translation in z should be replaced by zero,
>>> as it corresponds to the source-isocenter distance and is taken into
>>> account in the magnification step.
>>> So I think it is easy to find all the rotation angles, and the sid
>>> distance as well.
>>>
>>> The intrinsic parameters matrix can be decomposed in order to find the
>>> focal (or source-detector) distance and the projection offset, from the
>>> U0, V0 parameters, subtracting the detector half size in each direction.
>>>
>>> What I do not understand is:
>>> - In the RTK documentation, it is stated that "The detector position is
>>> defined with respect to the source", but the ProjectionTranslationM in
>>> RTK contains a term in sourceOffsetX-projOffsetX although sourceOffset
>>> has already been taken into account earlier.
>>> - Why reconstructions aren't working at all.
>>>
>>> I have enclosed a sample of the geometry file I generated; it provides
>>> some acceptable results when used for phantom projection, but totally
>>> wrong reconstructions when reconstructing my image data with SART
>>> (sample image taken from a reconstructed volume).
>>>
>>> Thank you in advance for your help, and sorry for the long mail.
>>>
>>> _______________________________________________
>>> Rtk-users mailing list
>>> Rtk-users at public.kitware.com
>>> http://public.kitware.com/mailman/listinfo/rtk-users
>
> _______________________________________________
> Rtk-users mailing list
> Rtk-users at public.kitware.com
> http://public.kitware.com/mailman/listinfo/rtk-users
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: recons_attempt.jpg
Type: image/jpeg
Size: 7162 bytes
Desc: not available
URL:

From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014
From: wuchao04 at gmail.com (Chao Wu)
Date: Fri, 5 Dec 2014 09:39:07 +0100
Subject: [Rtk-users] Geometry import and detector displacement
In-Reply-To: References: Message-ID:

see below

2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault :
>
> Hi Chao, and thank you for this detailed answer,
> If I understand well this sentence:
> "For many 'normal' 2D image formats the origin of the image is just at the
> first pixel (one corner), so the size of the projection offset is just the
> distance from the corner to D and has nothing to do with things like
> 'detector half size'."
> The projection offset corresponds exactly to the scaled U0,V0 parameters
> of the intrinsic matrix of the pinhole model, and in my understanding they
> should be close to half the detector size if all the out-of-plane rotations
> are negligible.
> But...
> When I generate a perfect geometry, without out-of-plane angles, with
> rtksimulatedgeometry, it appears that the projection offsets are set to
> zero, so I think I have not understood this sentence:
> "the projection offset is just the distance from the corner to D"

The projection offset is the offset of the image origin from the detector
origin (the orthogonal projection of the isocenter on the detector).
For a perfect geometry, rtksimulatedgeometry assumes that both the image
origin and the detector origin are at the center, so the projection offset
is zero. But as I said, in many normal 2D image formats like .png, .tif and
.bmp, the image origin is not defined, and ITK/RTK uses the first pixel as
the image origin. In this case the size of the projection offset is the
distance between the first pixel and the detector origin. If the latter is
at the detector centre, the projection offset will be half the detector
size. The sign depends on which quadrant of the detector coordinate system
the first pixel sits in.

> Another aspect that puzzled me is that I can't find documentation about
> the orientation of the u axis and v axis of the detector coordinate
> system (assuming a 0 gantry angle) with regard to the world coordinate
> system. This information could help me determine whether my
> projectionOffset should be negative or positive.

Without any rotation (gantry and detector), the detector coordinate system
is perfectly aligned with the object coordinate system: detector_x //
object_x, detector_y // object_y, and the detector origin is the orthogonal
projection of the object origin on the detector plane. Then there is
another mapping, from the image coordinate system to the detector
coordinate system. I have already explained the relationship between the
image origin and the detector origin above. How the image axes (u and v)
are orientated with regard to the detector axes (x and y) depends on the
direction cosines of the image. Again, this information does not exist in
many 2D image formats, and the default value in ITK/RTK is an identity
matrix, so u/v and x/y are also aligned.
From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014
From: simon.rit at creatis.insa-lyon.fr (Simon Rit)
Date: Fri, 5 Dec 2014 14:39:53 +0100
Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter
 RayCastInterpolatorForwardProjectionImageFilter
In-Reply-To: References: Message-ID:

Hi Steffen,
I'm not sure I understand it all, but isn't this due to interpolation? If
you were using a finer voxelized box as input, the difference between
Siddon and Joseph should decrease.
Regarding tracking every step, yes, you should be able to do such things
(and if you are not, I'm open to modifying the code). We have done some
similar work in Gate using RTK. This is not public yet, but the idea is to
implement specific functors for Joseph. You should look at the code, and at
the two TInterpolationWeightMultiplication and TProjectedValueAccumulation
templates in particular. If you want an example, I'll send you a copy of
what we've done in Gate.
Simon

On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote:
> Sorry, mail went out too quickly.
>
> Hi Simon
>
> I check against my quick ray-tracer implementation in Siddon style.
>
> I tried the enlarged volume with 0-boundary already before, but can't
> resolve the issue completely.
>
> I put an example below; for some reason I get signal at the outer
> detectors where there should be none.
> Also: Can I somehow keep track of the voxels traversed in your code
> (for dosimetric and simulation applications)?
>
> Example:
>
> double sid = 100, aid = 20;
> int nproj = 1;
> double first_angle = 0, angular_arc = 360;
>
> volume_spacing(1, 1, 1);
> volume_center(0.0, 0.0, 0.0);
> volume_size(3, 3, 3);
>
> projection_center(0.0, 0.0, 0.0);
> projection_size(5, 5, nproj);
> projection_spacing(1, 1, 1.0);
>
> The projections are:
>
> (1) Joseph projector
>
> z: 0
>      0:         1:        2:        3:        4:
> 0:   0.3339816  1.000174  1.000139  1.000174  0.3339816
> 1:   1.000174   3.000208  3.000104  3.000208  1.000174
> 2:   1.000139   3.000104  3         3.000104  1.000139
> 3:   1.000174   3.000208  3.000104  3.000208  1.000174
> 4:   0.3339816  1.000174  1.000139  1.000174  0.3339816
>
> (2) My ray tracer:
>
> z: 0
>      0:  1:        2:        3:        4:
> 0:   0   0         0         0         0
> 1:   0   3.000208  3.000104  3.000208  0
> 2:   0   3.000104  3         3.000104  0
> 3:   0   3.000208  3.000104  3.000208  0
> 4:   0   0         0         0         0
>
> (3) Ray-box integration (from -1.5 to 1.5)
>
> z: 0
>      0:  1:        2:        3:        4:
> 0:   0   0         0         0         0
> 1:   0   3.000208  3.000104  3.000208  0
> 2:   0   3.000104  3         3.000104  0
> 3:   0   3.000208  3.000104  3.000208  0
> 4:   0   0         0         0         0
>
> Values away from the boundary coincide; only at the detector boundary is
> there signal that I don't understand.
>
> Rgds
> Steffen
>
> 2014-12-05 9:46 GMT+01:00, Steffen Lukas :
>> Hi Simon
>>
>> I check against my quick ray-tracer implementation in Siddon style.
>>
>> I tried the enlarged volume with 0-boundary already before, but can't
>> resolve the issue completely.
>>
>> I put an example below; for some reason I get signal at the outer
>> detectors where there should be none.
>>
>> Arne
>>
>> Example:
>>
>> double sid = 100, aid = 20;
>> int nproj = 1;
>> double first_angle = 0, angular_arc = 360;
>>
>> volume_spacing(1, 1, 1);
>> volume_center(0.0, 0.0, 0.0);
>> volume_size(3, 3, 3);
>>
>> projection_center(0.0, 0.0, 0.0);
>> int3 projection_size(5, 5, nproj);
>> vect3 projection_spacing(1, 1, 1.0);
>> matr3 projection_direction = matr3::Identity();
>>
>> 2014-12-04 16:30 GMT+01:00, Simon Rit :
>>> Hi,
>>> Good point. Since we interpolate, we chose the model that you mention.
>>> A simple trick that should work is to add a 0 border around your
>>> volume. That will allow you to compare your results.
>>> Out of curiosity, what's your projector? If it's Siddon, that would
>>> make sense, but I wonder what you do if it's an interpolation model
>>> (Joseph, trilinear, etc.).
>>> Simon
>>>
>>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote:
>>>
>>>> Dear All
>>>>
>>>> I have a question regarding the forward projectors. It seems that at
>>>> the boundary, integration starts at mid-voxel, which makes it
>>>> difficult for me to compare with our own implementation since
>>>> information is partly lost.
>>>>
>>>> Can I somehow set up the projectors such that all (full) voxels are
>>>> integrated?
>>>>
>>>> Thanks a lot
>>>> Arne
>>>> _______________________________________________
>>>> Rtk-users mailing list
>>>> Rtk-users at public.kitware.com
>>>> http://public.kitware.com/mailman/listinfo/rtk-users

From spollmann at robarts.ca Tue Dec 9 19:39:41 2014
From: spollmann at robarts.ca (Steven Pollmann)
Date: Tue, 9 Dec 2014 19:39:41 -0500
Subject: [Rtk-users] rtkMacro.h GGO issue
Message-ID: <5487964D.5070601@robarts.ca>

A recent update to rtkMacro.h seems to have caused the ggo command line
processor to ignore command line flags (i.e. I can't get any verbose output
with '-v').
It seems to happen after making a second call to:

cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params)

Removing this second call has resolved the issue for me.
I'm not sure, however, what the intended use of the second call was
(it occurs immediately after:

args_params.check_required = 1;

which I feel could just be moved above the first call, as it happens
regardless, but I may be missing something).

I've attached my quickly modified rtkMacro.h for comparison to the latest
GitHub commit.

Anyhow, hopefully this info is useful, and doesn't only affect me.

Steve

Our system setup:
-Ubuntu 14.04 x64
-gcc 4.8.2
-cuda 6.5

-------------- next part --------------
A non-text attachment was scrubbed...
Name: rtkMacro.h
Type: text/x-chdr
Size: 6578 bytes
Desc: not available
URL:

From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014
From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory)
Date: Wed, 10 Dec 2014 09:53:40 +0100
Subject: [Rtk-users] rtkMacro.h GGO issue
In-Reply-To: <5487964D.5070601@robarts.ca>
References: <5487964D.5070601@robarts.ca>
Message-ID: <54880A14.6070601@creatis.insa-lyon.fr>

Hi Steven,

Thanks a lot for having tracked down the issue. I had the same problem and
didn't know where to start to diagnose it. So yes, this info is useful. I
do not know why this second call has been added, though.

Cyril

On 12/10/2014 01:39 AM, Steven Pollmann wrote:
> A recent update to rtkMacro.h seems to have caused the ggo command
> line processor to ignore command line flags (i.e. I can't get any
> verbose output with '-v').
> It seems to happen after making a second call to:
>
> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params)
>
> Removing this second call has resolved the issue for me.
> I'm not sure, however, what the intended use of the second call was
> (it occurs immediately after:
>
> args_params.check_required = 1;
>
> which I feel could just be moved above the first call, as it happens
> regardless, but I may be missing something).
>
> I've attached my quickly modified rtkMacro.h for comparison to the
> latest GitHub commit.
>
> Anyhow, hopefully this info is useful, and doesn't only affect me.
>
> Steve
>
> Our system setup:
> -Ubuntu 14.04 x64
> -gcc 4.8.2
> -cuda 6.5
>
> _______________________________________________
> Rtk-users mailing list
> Rtk-users at public.kitware.com
> http://public.kitware.com/mailman/listinfo/rtk-users

--
Cyril Mory, Post-doc
CREATIS
Leon Berard cancer treatment center
28 rue Laënnec
69373 Lyon cedex 08 FRANCE
Mobile: +33 6 69 46 73 79

From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014
From: simon.rit at creatis.insa-lyon.fr (Simon Rit)
Date: Wed, 10 Dec 2014 10:01:06 +0100
Subject: [Rtk-users] rtkMacro.h GGO issue
In-Reply-To: <5487964D.5070601@robarts.ca>
References: <5487964D.5070601@robarts.ca>
Message-ID:

Hi,
Thanks for the report, very useful information. I could reproduce the bug
and I hope that I have fixed it. Briefly:
- I changed the code because Ben Champion reported memory leaks and I
noticed that they occurred in deprecated functions of gengetopt that I
don't use anymore;
- the way the new macro (as well as the previous one) is written is: first
read the command line to find whether a config file is passed, then read
the config file, and finally read the command line again to check that
everything required has been passed;
- your fix was not perfect because we would not have checked that the
required options were set;
- it turns out that disabling the override option did the job.
Everything works fine now, but let me know if you notice anything wrong
again.
Thanks again,
Simon

From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014
From: padraig.looney at gmail.com (Padraig Looney)
Date: Wed, 10 Dec 2014 11:59:36 +0000
Subject: [Rtk-users] positioning of the reconstructed volume and some
 questions on filtering
Message-ID:

Dear list,

We have been using RTK to reconstruct some digital breast tomosynthesis
images. The reconstruction using BackProjectionImageFilter looks good. The
only issue we are having is in specifying the coordinates of the
reconstructed volume. The coordinate system is attached and the code we use
to reconstruct is below. I expected the origin of the first slice in the
reconstructed volume to be at (w,-h/2,offset). What I find is that the
reconstructed volume is shifted in the y direction by about half the height
(but not exactly).
The X position looks correct for this phantom.

rtkBackProjectionImageFilter is described as "implementation of the back
projection step of the FDK also for *filtered* back projection
reconstruction for cone-beam CT images with a circular source trajectory".
However, I could not find any filtering of data in the code. Could you
please confirm whether there is filtering in this code and what type of
filters there are (ramp, Hann, etc.)? Also, is the difference with
rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for
cone beam while rtkBackProjectionImageFilter is not?

// Create reconstructed image
typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType;
ConstantImageSourceType::PointType origin;
ConstantImageSourceType::SpacingType spacing;
ConstantImageSourceType::SizeType sizeOutput;
ConstantImageSourceType::DirectionType direction;
direction.SetIdentity();

sizeOutput[0] = 1890; //1747; as found in dicom info
sizeOutput[1] = 2457; // as found in dicom info
sizeOutput[2] = 1;    // as found in dicom info

double offset(26.27); // Gap between detector and sample
origin[0] = 171.99;
origin[1] = -223/2;   // 223 is the height of the reconstructed volume
origin[2] = offset+0;

spacing[0] = 0.091;
spacing[1] = 0.091;
spacing[2] = 1;

direction [0][0] = -1;
direction [0][1] = 0;
direction [0][2] = 0;
direction [1][0] = 0;
direction [1][1] = 1;
direction [1][2] = 0;
direction [2][0] = 0;
direction [2][1] = 0;
direction [2][2] = 1;

ConstantImageSourceType::Pointer constantImageSource =
ConstantImageSourceType::New();

constantImageSource->SetOrigin( origin );
constantImageSource->SetSpacing( spacing );
constantImageSource->SetSize( sizeOutput );
constantImageSource->SetConstant( 0. );
constantImageSource->SetDirection(direction);

const ImageType::DirectionType& direct =
constantImageSource->GetDirection();

std::cout << "Direction3DZeroMatrix= " << std::endl;
std::cout << direct << std::endl;

std::cout << "Performing reconstruction" << std::endl;

// Back projection reconstruction (no filtering)
typedef rtk::ProjectionGeometry<3> ProjectionGeometry;
ProjectionGeometry::Pointer baseGeom = geometry.GetPointer();
typedef rtk::BackProjectionImageFilter< ImageType, ImageType > FDKCPUType;
FDKCPUType::Pointer feldkamp = FDKCPUType::New();
feldkamp->SetInput( 0, constantImageSource->GetOutput() );
feldkamp->SetInput( 1, imageStack );
feldkamp->SetGeometry( baseGeom );
feldkamp->Update();

-------------- next part --------------
A non-text attachment was scrubbed...
Name: reconstruct.pdf
Type: application/pdf
Size: 12356 bytes
Desc: not available
URL:

From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014
From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory)
Date: Wed, 10 Dec 2014 13:35:19 +0100
Subject: [Rtk-users] positioning of the reconstructed volume and some
 questions on filtering
In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr>

Hi Padraig,

I can only answer part of your questions, sorry about the others: neither
rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter performs
filtering, and both are cone-beam. In fact, at the moment, cone-beam is the
only geometry available in RTK. The difference is that
rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter
and redefines some methods (I think it performs a specific weighting of the
projection data depending on the distance to the central plane, as
described in the FDK paper, but I cannot say for sure).
As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in
the rtkfdk application, and the back projection filter you must then use is
either rtkFDKBackProjectionImageFilter or its CUDA or OpenCL counterpart.
If you wish to design iterative reconstruction algorithms, on the other
hand, use the non-FDK back projection filters.

Without filtering, your reconstruction is probably very blurry. I would
advise you to convert your data to the ITK standard mhd and raw formats,
and to use the rtkfdk application. Once you get a good reconstruction out
of the box with your data, you can start playing with internal filters.

Regards,
Cyril

--
Cyril Mory, Post-doc
CREATIS
Leon Berard cancer treatment center
28 rue Laënnec
69373 Lyon cedex 08 FRANCE
Mobile: +33 6 69 46 73 79

From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014
From: simon.rit at creatis.insa-lyon.fr (Simon Rit)
Date: Wed, 10 Dec 2014 16:54:29 +0100
Subject: [Rtk-users] positioning of the reconstructed volume and some
 questions on filtering
In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr>
References: <54883E07.9060308@creatis.insa-lyon.fr>
Message-ID:

Hi,
Please refer to my previous post to understand the coordinates of your
volume:
http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html
That should explain your coordinate system.
Cyril is right, there is no filtering in the FDKBackProjectionImageFilter
and the BackProjectionImageFilter. Both work for perspective projections,
but they also work for parallel beams (and then give the same result).
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA or OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly).
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory". > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0.
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override option makes sense, and I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset (to null or default values), and thus thought 'override' meant something else!). Thanks again, glad the info was helpful.
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occurred in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything works fine now but let me know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me.
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override option makes sense, and I didn't have time to trace > through gengetopt. I thought I was missing something, as none of the > non-flag arguments were being reset (to null or default values), and thus > thought 'override' meant something else!). > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed.
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything works fine now but let me know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data.
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. What parameters and how can I adjust (like alpha, beta, or iterations?) to improve the ADMMTV reconstruction? There is not much description of this application from the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets. I hope this helps. Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations.
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test if TV benefits us anything using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became.
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). > Alpha controls the amount of regularization and only that (the higher, the > more regularization). Beta, theoretically, should only change the > convergence speed, provided you do an infinite number of iterations (I know > it doesn't help, sorry again !). In practice, beta is ubiquitous and > appears everywhere in the calculations, therefore it is hard to predict > what effect an increase/decrease of beta will give on the images. I would > keep it as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations.
Using > the CUDA forward and back projectors helped a lot to keep the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations.
> > What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to too high a gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations.
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noisier compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma to 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma didn't make a difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, admmwavelets seems to perform better than admmtotalvariation > but neither gave satisfactory results. Not sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English.
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again !). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot to keep > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether your typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization).
If this denoising implies an > unacceptable loss of quality, stay away from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with sparse >> data sample. I could reconstruct but the results were not as good >> as expected. In other words, it didn't show much improvement >> compared to fdk reconstruction using the same sparse projection >> data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 degree >> scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not >> much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to too high a gamma. Depending > on your data, gamma may have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noisier compared to that using 3 iterations. The contrast is getting worse > as well.
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma didn't make a > difference. > I hope I didn't misuse the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, admmwavelets seems to perform better than admmtotalvariation but > neither gave satisfactory results. Not sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !).
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot to keep the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample.
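Cyril's description of the two parameters can be made concrete with a toy 1D version of ADMM TV denoising (a plain-numpy sketch for intuition only, not RTK's implementation; all variable names and values are illustrative): alpha weights the TV term in the objective, while beta is only the ADMM splitting penalty.

```python
import numpy as np

def tv_denoise_admm_1d(b, alpha, beta, n_iter):
    """Toy 1D ADMM for min_x 0.5*||x - b||^2 + alpha*||D x||_1.
    alpha weights the regularization; beta is only the ADMM penalty."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    A = np.eye(n) + beta * (D.T @ D)          # fixed x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        x = np.linalg.solve(A, b + beta * D.T @ (z - u))   # x-update
        Dx = D @ x
        # z-update: soft-thresholding with threshold alpha/beta
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - alpha / beta, 0.0)
        u += Dx - z                            # dual update
    return x

# piecewise-constant signal (the case TV is made for) plus noise
truth = np.concatenate([np.zeros(30), np.ones(30)])
noisy = truth + 0.2 * np.random.default_rng(0).standard_normal(truth.size)
denoised = tv_denoise_admm_1d(noisy, alpha=0.3, beta=1.0, n_iter=200)
```

Running more iterations only drives the iterate closer to the minimizer of the same objective, which is consistent with the advice above: over-smoothing points to the regularization weight, not the iteration count.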
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data.
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command: rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com: > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. > Depending on your data, gamma may need to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry.
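The i0 issue Cyril points out is the Beer-Lambert conversion: attenuation p = ln(I0/I), so an underestimated I0 leaves a positive offset in air. A minimal numpy sketch on synthetic data (estimating I0 from a border region assumed to contain only air, which is an assumption about the acquisition, not something stated for Howard's scanner):

```python
import numpy as np

i0_true = 10000.0
# synthetic intensity projection: an attenuating object in the middle, air at the borders
true_attenuation = np.zeros((64, 64))
true_attenuation[16:48, 16:48] = 2.0
intensity = i0_true * np.exp(-true_attenuation)

i0_est = intensity[:, :4].mean()           # estimate I0 from an air-only region
attenuation = np.log(i0_est / intensity)   # Beer-Lambert: p = ln(I0 / I)
# air pixels now map to ~0 attenuation, which is what iterative methods expect
```

If i0_est were too small, every air pixel would keep a constant positive attenuation, which is exactly the "much-higher-than-zero attenuation in air" symptom described above.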
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noisier compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gammas at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gammas >> didn't make a difference. >> I hope I didn't misuse the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, admmwavelets seems to perform better than >> admmtotalvariation but neither gave satisfactory results. Not sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run.
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry !). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again !). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> give on the images. I would keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot to keep the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results.
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with >>> sparse data sample. I could reconstruct but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to fdk reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the fdk reconstruction parameters are: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> What parameters and how can I adjust (like alpha, beta, or >>> iterations?) to improve the ADMMTV reconstruction? There is >>> not much description of this application from the wiki page.
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example. 1) The reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:
p attenuation
0 0, or log(257)-log(257)
1 log(257)-log(2)
2 log(257)-log(3)
3 log(257)-log(4)
...
254 log(257)-log(255)
255 log(257)-log(256)
My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)? Thanks and best regards, Chao From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question.
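Chao's table can be transcribed directly. A small Python version of the 8-bit LUT exactly as described above (a sketch following the description, not RTK's actual rtkTiffLookupTableImageFilter code):

```python
import math

REFERENCE = math.log(257.0)   # reference value for 8-bit data, as described

def tiff_to_attenuation(p):
    """Map an 8-bit TIFF value to attenuation: p = 0 is treated as air
    (no attenuation); for 1 <= p <= 255 it is log(257) - log(p + 1)."""
    if p == 0:
        return 0.0
    return REFERENCE - math.log(p + 1.0)

lut = [tiff_to_attenuation(p) for p in range(256)]
```

Writing it out makes the oddity in the question visible: the table jumps from 0 at p = 0 straight to log(257) - log(2) at p = 1, then decreases monotonically.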
I can't remember exactly but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border which is probably why I did this since the border is next to air. Don't hesitate to build your own tiff LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like:
> p attenuation
> 0 0, or log(257)-log(257)
> 1 log(257)-log(2)
> 2 log(257)-log(3)
> 3 log(257)-log(4)
> ...
> 254 log(257)-log(255)
> 255 log(257)-log(256)
> > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users From ghostcz at hotmail.com Tue Dec 2 16:21:47 2014 From: ghostcz at hotmail.com (louie L) Date: Tue, 2 Dec 2014 22:21:47 +0100 Subject: [Rtk-users] Input and output image buffer Message-ID: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory.
However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you. Best regards, Louie From simon.rit at creatis.insa-lyon.fr Wed Dec 3 03:31:28 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 09:31:28 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed be pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > Dear RTK users and developers, > > I am writing a backprojection filter whose superclass is > ImageToImageFilter. After allocating the output, I called > this->GetInput()->GetBufferPointer() and > this->GetOutput()->GetBufferPointer() > to get the address of the images in memory. However the two functions > above return the same value. Why? If this is not the correct way to get the > address of the input image, how can I get that address? > Thank you. > > Best regards, > Louie > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnthibault at gmail.com Wed Dec 3 09:27:40 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Wed, 3 Dec 2014 15:27:40 +0100 Subject: [Rtk-users] Geometry import and detector displacement Message-ID: Dear all, I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task.
The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. Each calibration matrix is a direct 3D world to 2D buffer index matrix. Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. The pinhole camera model I used can be found here at p18 of the pdf. I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: - a world coordinate system to source centered system matrix (modeling euler 3D rotation and also translation from isocenter to source) - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: The extrinsic parameters matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance, and is taken into account in the magnification step. So I think it is easy to find all the rotation angles, and the sid distance as well. The intrinsic parameters matrix could be decomposed in order to find the focal length (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction.
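The factorization described above can be prototyped numerically: the left 3x3 block of a 3*4 pinhole matrix splits as K·R (intrinsics times rotation) via an RQ decomposition. A numpy sketch of the generic pinhole decomposition (not RTK's code; all calibration values below are made up for illustration):

```python
import numpy as np

def rq(M):
    """RQ decomposition M = K @ R (K upper triangular with positive diagonal,
    R orthogonal), via QR of the row-reversed, transposed matrix."""
    P = np.fliplr(np.eye(3))              # anti-diagonal reversal permutation
    q, r = np.linalg.qr((P @ M).T)
    K = P @ r.T @ P                       # upper triangular
    R = P @ q.T                           # orthogonal
    D = np.diag(np.sign(np.diag(K)))      # fix signs so K's diagonal is positive
    return K @ D, D @ R                   # (K D)(D R) = K R = M since D D = I

# synthetic pinhole camera: intrinsics K0 (focal f, principal point u0, v0),
# rotation R0 about y, translation t0 -- illustrative values only
f, u0, v0 = 1500.0, 256.0, 200.0
K0 = np.array([[f, 0, u0], [0, f, v0], [0, 0, 1.0]])
ang = np.deg2rad(30.0)
R0 = np.array([[np.cos(ang), 0, np.sin(ang)],
               [0, 1, 0],
               [-np.sin(ang), 0, np.cos(ang)]])
t0 = np.array([5.0, -3.0, 1000.0])
Pmat = K0 @ np.hstack([R0, t0[:, None]])  # the 3x4 projection matrix

K, R = rq(Pmat[:, :3])                    # recover intrinsics and rotation
t = np.linalg.solve(K, Pmat[:, 3])        # recover translation
```

Because the RQ factorization with a positive-diagonal K is unique, the synthetic intrinsics, rotation and translation are recovered exactly; checking a real calibration matrix this way separates factorization errors from geometry-convention errors.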
What I do not understand is: -In the rtk documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in rtk contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier. -Why reconstructions aren't working at all I have enclosed a sample geometry file I generated that provides some acceptable results when used for phantom projection, but provides totally wrong reconstructions when reconstructing my image data with sart (sample image taken from a reconstructed volume). Thank you in advance for your help, and sorry for the long mail -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: calibration_reelle.xml Type: text/xml Size: 135704 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 3 10:46:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 3 Dec 2014 16:46:16 +0100 Subject: [Rtk-users] SimpleRTK: wrappings for Python, C#, ... Message-ID: Dear RTK users, It is my pleasure to announce that I have merged in the master branch of the public repository our developments for RTK wrappings in Python and other languages. The mechanism is based on SimpleITK and all necessary information should be available on the wiki page of SimpleRTK . If you start using it, you will quickly notice that many filters are not wrapped yet. However, it is very easy in my experience to add some wrappings, as explained on the wiki page. Please, don't hesitate to send comments, suggestions and new wrappings. I will be happy to answer any question and to incorporate suggested changes. Enjoy and thanks in advance for your help!
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghostcz at hotmail.com Wed Dec 3 11:33:34 2014 From: ghostcz at hotmail.com (ghostcz) Date: Wed, 3 Dec 2014 17:33:34 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi Simon, Yes, it solved the problem. There are some more related questions. Filters like backprojectionFilter have more than one input. As it is an InPlaceFilter, it will overwrite the input. But which input will be updated? From the existing filters, it seems it is the input(0). Is this defined somewhere? Can I change this? If I query the buffer of input(1), will I get the correct address? Another one: if I pass an ITK image pointer to a function instead of defining this image as an input, will I run into the same problem? Does it have an impact on speed and RAM consumption? Thank you! Best regards, Louie From: Simon Rit Sent: Wednesday, December 03, 2014 9:31 AM To: louie L Cc: rtk-users at public.kitware.com Subject: Re: [Rtk-users] Input and output image buffer Hi Louie, What you do is correct and what you obtain is expected. BackProjectionImageFilter inherits from InPlaceImageFilter. InPlaceImageFilter overwrites the input by default. If you don't want this behavior, you can simply call InPlaceOff before updating. Then, the buffers will indeed be pointing to different memory spaces. Hope this helps, Simon On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: Dear RTK users and developers, I am writing a backprojection filter whose superclass is ImageToImageFilter. After allocating the output, I called this->GetInput()->GetBufferPointer() and this->GetOutput()->GetBufferPointer() to get the address of the images in memory. However the two functions above return the same value. Why? If this is not the correct way to get the address of the input image, how can I get that address? Thank you.
Best regards, Louie _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:15:58 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:15:58 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, It is going to be challenging... but we'll try to do our best to help you. One important question is: what coordinate system is used by your 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the tomography and the projections), which is defined in ITK by the origin (coordinate of the center of the first pixel), the spacing, the direction. Defining this information in your images is very important to have accurate results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin of your projections' coordinate system at the center of the projections, have you Your reconstruction example looks indeed completely wrong. Have you tried to backproject one projection only and to check that it is as expected? By the way, the AddProjection of the image works in degrees, you should use AddProjectionInRadians otherwise. Don't hesitate to share a dataset if you want us to help further. Simon On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault wrote: > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues with this task. > > The system projection matrix is transparently calibrated, and the > calibration process gives a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix.
> > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrices, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used can be found here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and matches the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > . > > My problem arises when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrices, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > The extrinsic parameters matrix corresponds to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > corresponds to the source-isocenter distance, and is taken into account in the > magnification step. > So I think it is easy to find all the rotation angles, and the sid distance > as well. > > The intrinsic parameters matrix could be decomposed in order to find the > focal length (or source-detector distance) and the projection offset, from the U0, > V0 parameters, subtracting the detector half size in each direction.
> -Why reconstructions aren't working at all > > I have enclosed a sample geometry file I generated that provides > some acceptable results when used for phantom projection, but provides > totally wrong reconstructions when reconstructing my image data with sart > (sample image taken from a reconstructed volume). > > Thank you in advance for your help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 03:42:11 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 09:42:11 +0100 Subject: [Rtk-users] Input and output image buffer In-Reply-To: References: Message-ID: Hi, Maybe we should explain that on the wiki, we'll prepare a page. In the meantime, a quick answer. InPlaceImageFilter modifies the first input (#0). Backprojection updates a volume from projection images, so the first input is the same as the output, the volume. Forward projection updates projection images from a volume so the first input is the same as the output, the projections. I do not see how you could modify this, could you give an example of why you would do that? Yes, you can get the buffer pointer to the second input with filt->GetInput(1)->GetBufferPointer(). For the second part, I don't know what the problem is, but even if you can play with buffer pointers, I would avoid this if I were you because you then lose the pipeline capabilities of ITK filters. I hope this helps, Simon On Wed, Dec 3, 2014 at 5:33 PM, ghostcz wrote: > Hi Simon, > > Yes, it solved the problem. > There are some more related questions.
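The input #0 aliasing that Simon describes can be mimicked with plain numpy arrays as a memory-level analogy (this is not the ITK API, just an illustration of what an in-place filter does with its buffers):

```python
import numpy as np

volume = np.zeros((4, 4, 4), dtype=np.float32)

# in-place behaviour (InPlaceOn, the default): the output *is* the input buffer
out_inplace = volume
# InPlaceOff behaviour: the output gets its own allocation
out_separate = volume.copy()

same_buffer = out_inplace.ctypes.data == volume.ctypes.data        # True
distinct_buffer = out_separate.ctypes.data != volume.ctypes.data   # True

out_inplace[0, 0, 0] = 1.0   # writing the aliased output modifies the input...
# ...while the separately allocated output leaves it untouched
```

This is also why GetBufferPointer() returned the same address for input and output in the original question: with in-place execution there is only one buffer.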
Filters like backprojectionFilter > have more than one input. As it is an InPlaceFilter, it will overwrite the > input. But which input will be updated? From the existing filters, it seems > it is the input(0). Is this defined somewhere? Can I change this? If I > query the buffer of input(1), will I get the correct address? > Another one: if I pass an ITK image pointer to a function instead of > defining this image as an input, will I run into the same problem? Does it > have an impact on speed and RAM consumption? > Thank you! > > Best regards, > Louie > > *From:* Simon Rit > *Sent:* Wednesday, December 03, 2014 9:31 AM > *To:* louie L > *Cc:* rtk-users at public.kitware.com > *Subject:* Re: [Rtk-users] Input and output image buffer > > Hi Louie, > What you do is correct and what you obtain is expected. > BackProjectionImageFilter inherits from InPlaceImageFilter. > InPlaceImageFilter overwrites the input by default. If you don't want this > behavior, you can simply call InPlaceOff > > before updating. Then, the buffers will indeed be pointing to different > memory spaces. > Hope this helps, > Simon > > On Tue, Dec 2, 2014 at 10:21 PM, louie L wrote: > >> Dear RTK users and developers, >> >> I am writing a backprojection filter whose superclass is >> ImageToImageFilter. After allocating the output, I called >> this->GetInput()->GetBufferPointer() and >> this->GetOutput()->GetBufferPointer() >> to get the address of the images in memory. However the two functions >> above return the same value. Why? If this is not the correct way to get the >> address of the input image, how can I get that address? >> Thank you. >> >> Best regards, >> Louie >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wuchao04 at gmail.com Thu Dec 4 05:57:10 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 4 Dec 2014 11:57:10 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Thibault, Source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about z axis, the out-of-plane rotation defined in RTK is about x axis, and there should be another out-of-plane rotation about y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. However RTK sees this differently. Since this out-of-plane rotation about y can be in fact merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than that in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal" source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal"
2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. I hope I did not make any mistake in this long description? Regards, Chao 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > Dear all, > > I am currently trying to import data generated with a custom tomographic > system into RTK, and I am facing issues whith this task. > > The system projection matrix is transparently calibrated, and the > calibration process give a 3*4 projection matrix for each acquisition > position. > Each calibration matrix is a direct 3D world to 2D buffer index matrix. > > Using the pinhole model, I tried to factorize this matrix as the product > of various submatrix, including a 3D centered Euler transform, using this > note as stated > in rtkReg23Geometry.cxx. > The pinhole camera model I used could be find here > at p18 of the pdf. > I think that the way I factorized the matrix is correct, and match the > GantryAngle/InPlanAngle/OutOfPlanAngle model described here > .
> > My problem arise when I try to model the x/z tilt of the detector: when > decomposing my projection matrix into different matrix, each modelling a > system coordinate change, I have: > - a world coordinate system to source centered system matrix (modeling > euler 3D rotation and also translation from isocenter to source) > - a source centered system to 2D buffer index matrix modeling source > to detector and pixel size scaling and then detector translation (U0,V0) > > As I understand, the pinhole model should allow a perfect fit with the RTK > geometry model in the following sense: > Extrinsinc parameters matrix correspond to the SourceTranslationM and > RotationM in RTK, assuming that the order of the rotation follows RTK > reference. And the translation in z should be replaced by zero, as it > correspond to source-isocenter distance, and is taken into accounts in the > magnification step. > So I think it is easy to find all the rotation angle, and the sid distance > as well > > Intrinsics parameters matrix could be decomposed in order to find the > focal (or source detector distance) and the projection offset, from the U0, > V0 parameters, substracting the detector half size in each direction. > > What I do not understand is: > -In the rtk documentation, it is stated that "The detector position is > defined with respect to the source" but the ProjectionTranslationM in rtk > contains a term in sourceOffsetX-projOffsetX although sourceOffset has > already been taken into account earlier. > -Why reconstruction aren't working at all > > I enclosed you a sample of geometry file I have generated that provide > some acceptable result when used for phantom projection, but provide > totally wrong reconstruction when reconstructing my image data with sart > (sample image taken from a reconstructed volume). 
> > Thank you in advance for you help, and sorry for the long mail > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig1.png Type: image/png Size: 4357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fig2.png Type: image/png Size: 6105 bytes Desc: not available URL: From arnheim66 at googlemail.com Thu Dec 4 06:09:42 2014 From: arnheim66 at googlemail.com (Arnheim Blanchr) Date: Thu, 4 Dec 2014 12:09:42 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter Message-ID: Dear All, I have a question regarding the forward projectors. It seems that at the boundary, integration starts at mid-voxel, which makes it difficult for me to compare with our own implementation since information is partly lost. Can I somehow set up the projectors such that all (full) voxels are integrated? Thanks a lot, Arne From simon.rit at creatis.insa-lyon.fr Thu Dec 4 08:40:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 14:40:53 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: ITK goes from voxel coordinates v to physical coordinates x with the following formula: x = d*s*v + o, where s is a diagonal nxn matrix with the spacing on the diagonal, d is the nxn direction matrix to allow rotations and o is the origin (n is the dimension of your space).
I don't know if/where it is documented but that would be in the ITK documentation. I typically look at the code directly (function TransformIndexToPhysicalPoint). Probably Direction is not the problem in your case and the default identity is correct, but it's something you should know about. I'm a bit lost in your geometric descriptions but it should not be so difficult to find the RTK transformation. If you know the position of your source, the position of the origin of the coordinate system of your detector image and the direction of the two axes of your detector, all these in the tomography coordinate system, rtk::Reg23ProjectionGeometry::AddReg23Projection does the decomposition for you... Simon On Thu, Dec 4, 2014 at 10:35 AM, Notargiacomo Thibault wrote: > Thank you Simon, > To answer your questions: > My 3*4 matrix allows me to change from a world coordinate system, whose origin > corresponds to the isocenter in rtk, to an image buffer index. > > But I decompose this matrix in order to isolate the wcs to acquisition > plane, and this projection coordinate system is indeed centered in the > middle of the projection plane, which corresponds to the orthogonal > projection of the focal point. > > I am aware of that fact; this is why I took care to perform the following > in rtk code: > inputImage->SetOrigin( origin ); > inputImage->SetSpacing( spacing ); > > With origin a point that corresponds to: > ( - half_detector_sizeX_in_mm/2, -half_detector_sizeY_in_mm/2, 0 ) > and Spacing, a vector that contains > (detector_pixel_sizeX_in_mm, detector_pixel_sizeY_in_mm, 1 ) > > But I did not set the direction vector; is there a document where I can > find what value I have to set it to, according to my acquisition geometry ? > > Thank you for your help, > > Kind Regards > > Thibault Notargiacomo > > 2014-12-04 9:15 GMT+01:00 Simon Rit : > >> Hi Thibault, >> It is going to be challenging... but we'll try to do our best to help >> you.
One important question is: what coordinates system are used by your >> 3*4 matrices. RTK uses the ITK coordinate system for its images (i.e., the >> tomography and the projections), which is defined in ITK by the origin >> (coordinate of the center of the first pixel), the spacing, the direction. >> Defining this information in your images is very important to have accurate >> results. In the DEA.pdf file that you've provided, Fig1.1 shows an origin >> of your projectionscoordinate system at the center of the projections, have >> you >> Your reconstruction example looks indeed completely wrong. Have you tried >> to backproject one projection only and to check that it is as expected? >> By the way, the AddProjection of the image works in degrees, you should >> use AddProjectionInRadians otherwise. >> Don't hesitate to share a dataset if you want us to help further. >> Simon >> >> On Wed, Dec 3, 2014 at 3:27 PM, Notargiacomo Thibault < >> gnthibault at gmail.com> wrote: >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 10:30:02 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 16:30:02 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi, Good point. Since we interpolate, we chose the model that you mention. A simple trick that should work is to add a 0 border around your volume. That will allow you to compare your results. Out of curiosity, what's your projector? If it's Siddon, that would make sense but I wonder what you do if it's an interpolation model (Joseph, trilinear, etc). Simon On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr wrote: > Dear All > > I have a question regarding the forward projectors. It seems that at > the boundary integration starts at mid-voxel which makes it difficult > for me to compare with our own implemention since information is > partly lost. > > Can I somehow setup the projectors such that all (full) voxel are > integrated? > > Thanks a lost > Arne > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gnthibault at gmail.com Thu Dec 4 13:17:23 2014 From: gnthibault at gmail.com (Notargiacomo Thibault) Date: Thu, 4 Dec 2014 19:17:23 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: Hi Chao, and thank you for this detailed answer. If I understand well this sentence: *"For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size"."* The projection offset corresponds exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half the detector size if all the out-of-plane rotations are negligible. But... When I generate a perfect geometry, without out-of-plane angles, with rtksimulatedgeometry, it appears that the projection offsets are set to zero, so I think I have not understood this sentence: *"the projection offset is just the distance from the corner to D"* Another aspect that puzzled me is that I can't find documentation about the orientation of the u axis and v axis of the detector coordinate system (assuming a 0 gantry angle) with regard to the world coordinate system. This information could help me determine whether my projectionOffset should be negative or positive. About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I observed the following behaviour: *Origin point:* ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) the coordinate in Z is a bit odd but why not ? *Spacing* (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) Direction: a classic 3*3 identity matrix This is exactly the kind of value I use when importing my images in rtk.
Thank you for your time, and help Simon: finding the position of the origin of the detector, and directions, etc... would require to perform the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. Kind regards Thibault Notargiacomo 2014-12-04 11:57 GMT+01:00 Chao Wu : > Hoi Thibault, > > Source offset appearing several times is because of a different view of > one kind of detector rotation. A detector can have three kinds of > rotations: the in-plane rotation defined in RTK is about z axis, the > out-of-plane rotation defined in RTK is about x axis, and there should be > another out-of-plane rotation about y axis. Assuming a zero out-of-plane > rotation about x, Fig 1 gives an common example of the rotation about y > together with definitions of sid and sdd in some systems. I guess this > figure may be more familiar and straightforward to some people. > > However RTK sees this differently. Since this out-of-plane rotation about > y can be in fact merged into the gantry angle, it is ignored in RTK. On the > other hand, parameters should be defined differently than that in Fig 1 to > represent this detector change, as shown in Fig 2: an ?ideal? source is > positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, > and AB is the size of the source offset. The origin of the detector is not > at the intersection F with the oblique ray AEF, but at the intersection D > with the perpendicular ray BED from the ?ideal? source B. The perpendicular > ray AC from the real source A intersects the detector at C differing from D > by CD or AB, the source offset, which is the reason that you see the source > offset appears again in the projection translation matrix. If the in-plane > rotation of the detector is zero, this source offset only has x element, > otherwise it contains both x and y elements. 
lastly, the size of projection > offset is the distance between the origin of the projection image and the > origin of the detector (point D). For many ?normal? 2D image format the > origin of the image is just at the first pixel (one corner), so the size of > the projection offset is just the distance from the corner to D and has > nothing to do with things like ?detector half size?. > > In fact the out-of-plane rotation about x has a similar effect in RTK > (causing shifts of source and detector origin, and changes of sid and sdd, > etc. compared with the point of view of the Fig 1 style), although this > angle itself is also needed for rotating the world coordinates. > > I hope I did not make any mistake in this long description? > > Regards, > Chao > > > 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : > >> Dear all, >> >> I am currently trying to import data generated with a custom tomographic >> system into RTK, and I am facing issues whith this task. >> >> The system projection matrix is transparently calibrated, and the >> calibration process give a 3*4 projection matrix for each acquisition >> position. >> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >> >> Using the pinhole model, I tried to factorize this matrix as the product >> of various submatrix, including a 3D centered Euler transform, using this >> note as stated >> in rtkReg23Geometry.cxx. >> The pinhole camera model I used could be find here >> at p18 of the >> pdf. >> I think that the way I factorized the matrix is correct, and match the >> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >> . 
>> >> My problem arise when I try to model the x/z tilt of the detector: when >> decomposing my projection matrix into different matrix, each modelling a >> system coordinate change, I have: >> - a world coordinate system to source centered system matrix >> (modeling euler 3D rotation and also translation from isocenter to source) >> - a source centered system to 2D buffer index matrix modeling source >> to detector and pixel size scaling and then detector translation (U0,V0) >> >> As I understand, the pinhole model should allow a perfect fit with the >> RTK geometry model in the following sense: >> Extrinsinc parameters matrix correspond to the SourceTranslationM and >> RotationM in RTK, assuming that the order of the rotation follows RTK >> reference. And the translation in z should be replaced by zero, as it >> correspond to source-isocenter distance, and is taken into accounts in the >> magnification step. >> So I think it is easy to find all the rotation angle, and the sid >> distance as well >> >> Intrinsics parameters matrix could be decomposed in order to find the >> focal (or source detector distance) and the projection offset, from the U0, >> V0 parameters, substracting the detector half size in each direction. >> >> What I do not understand is: >> -In the rtk documentation, it is stated that "The detector position is >> defined with respect to the source" but the ProjectionTranslationM in rtk >> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >> already been taken into account earlier. >> -Why reconstruction aren't working at all >> >> I enclosed you a sample of geometry file I have generated that provide >> some acceptable result when used for phantom projection, but provide >> totally wrong reconstruction when reconstructing my image data with sart >> (sample image taken from a reconstructed volume). 
>> >> Thank you in advance for you help, and sorry for the long mail >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Thu Dec 4 15:37:16 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 4 Dec 2014 21:37:16 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: rtksimulatedgeometry assumes a centered projection, so in this case the source, center-of-rotation and projection (0,0) points are aligned and the offsets are 0. The Z coordinate of the origin of the projection stack is not used and irrelevant. Your observation that it is odd is correct but it's harmless. I still think that using Reg23 is much simpler than decomposing the matrix, but it's up to you. For example, the directions of the vectors of the projection axes are the lines of your projection matrix, if I'm not mistaken. If you still want to decompose, I think you should have a look at how Phil did it: rtk::Reg23ProjectionGeometry.txx. Again, would you be able to provide a dataset? That would make it much easier for us to help you. Good luck, Simon On Thu, Dec 4, 2014 at 7:17 PM, Notargiacomo Thibault wrote: > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > *"For many "normal"
2D image format the origin of the image is just at the > first pixel (one corner), so the size of the projection offset is just the > distance from the corner to D and has nothing to do with things like > ?detector half size?."* > The projection offset correspond exactly to the scaled U0,V0 parameters of > the intrinsic matrix of the pinhole model, and in my understanding, they > should be close to half detector size if all the out of plane rotations are > negligible. > But... > When I generate a perfect geometry, without out of plane angles, > with rtksimulatedgeometry, it appear that projection offsets are set to > zero, so I think I have not understood this sentence: > *"the projection offset is just the distance from the corner to D"* > > An other aspect that puzzled my, is that I can't find documentation about > what is the orientation of the u axis and v axis of the detector coordinate > system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should > be negative or positive. > > About the images geometric data, I tried to use rtkprojectgeometricphantom > with my geometry in order to see what origin, spacing and direction are > attributed to the output image, and whithout surprise I experienced the > following behaviour: > > *Origin point:* > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, > -half_detector_size_in_mm/2 ) > the coordinates in Z is a bit odd but why not ? > *Spacing* > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > This is exactly the kind of value I use when importing my images in rtk. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, > etc... 
would require to perform the exact same steps of geometric matrix > decomposition I already use for the classic RTK geometric parameters plus > some more, so I think it would only add complexity and probably useless > steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : > >> Hoi Thibault, >> >> Source offset appearing several times is because of a different view of >> one kind of detector rotation. A detector can have three kinds of >> rotations: the in-plane rotation defined in RTK is about z axis, the >> out-of-plane rotation defined in RTK is about x axis, and there should be >> another out-of-plane rotation about y axis. Assuming a zero out-of-plane >> rotation about x, Fig 1 gives an common example of the rotation about y >> together with definitions of sid and sdd in some systems. I guess this >> figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about >> y can be in fact merged into the gantry angle, it is ignored in RTK. On the >> other hand, parameters should be defined differently than that in Fig 1 to >> represent this detector change, as shown in Fig 2: an ?ideal? source is >> positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, >> and AB is the size of the source offset. The origin of the detector is not >> at the intersection F with the oblique ray AEF, but at the intersection D >> with the perpendicular ray BED from the ?ideal? source B. The perpendicular >> ray AC from the real source A intersects the detector at C differing from D >> by CD or AB, the source offset, which is the reason that you see the source >> offset appears again in the projection translation matrix. If the in-plane >> rotation of the detector is zero, this source offset only has x element, >> otherwise it contains both x and y elements. 
lastly, the size of projection >> offset is the distance between the origin of the projection image and the >> origin of the detector (point D). For many ?normal? 2D image format the >> origin of the image is just at the first pixel (one corner), so the size of >> the projection offset is just the distance from the corner to D and has >> nothing to do with things like ?detector half size?. >> >> In fact the out-of-plane rotation about x has a similar effect in RTK >> (causing shifts of source and detector origin, and changes of sid and sdd, >> etc. compared with the point of view of the Fig 1 style), although this >> angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description? >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic >>> system into RTK, and I am facing issues whith this task. >>> >>> The system projection matrix is transparently calibrated, and the >>> calibration process give a 3*4 projection matrix for each acquisition >>> position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix. >>> >>> Using the pinhole model, I tried to factorize this matrix as the product >>> of various submatrix, including a 3D centered Euler transform, using this >>> note as >>> stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used could be find here >>> at p18 of the >>> pdf. >>> I think that the way I factorized the matrix is correct, and match the >>> GantryAngle/InPlanAngle/OutOfPlanAngle model described here >>> . 
>>> >>> My problem arise when I try to model the x/z tilt of the detector: when >>> decomposing my projection matrix into different matrix, each modelling a >>> system coordinate change, I have: >>> - a world coordinate system to source centered system matrix >>> (modeling euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source >>> to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the >>> RTK geometry model in the following sense: >>> Extrinsinc parameters matrix correspond to the SourceTranslationM and >>> RotationM in RTK, assuming that the order of the rotation follows RTK >>> reference. And the translation in z should be replaced by zero, as it >>> correspond to source-isocenter distance, and is taken into accounts in the >>> magnification step. >>> So I think it is easy to find all the rotation angle, and the sid >>> distance as well >>> >>> Intrinsics parameters matrix could be decomposed in order to find the >>> focal (or source detector distance) and the projection offset, from the U0, >>> V0 parameters, substracting the detector half size in each direction. >>> >>> What I do not understand is: >>> -In the rtk documentation, it is stated that "The detector position is >>> defined with respect to the source" but the ProjectionTranslationM in rtk >>> contains a term in sourceOffsetX-projOffsetX although sourceOffset has >>> already been taken into account earlier. >>> -Why reconstruction aren't working at all >>> >>> I enclosed you a sample of geometry file I have generated that provide >>> some acceptable result when used for phantom projection, but provide >>> totally wrong reconstruction when reconstructing my image data with sart >>> (sample image taken from a reconstructed volume). 
>>> >>> Thank you in advance for you help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: recons_attempt.jpg Type: image/jpeg Size: 7162 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Dec 5 03:39:07 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 5 Dec 2014 09:39:07 +0100 Subject: [Rtk-users] Geometry import and detector displacement In-Reply-To: References: Message-ID: see below 2014-12-04 19:17 GMT+01:00 Notargiacomo Thibault : > > Hi Chao, and thank you for this detailed answer, > If I understand well this sentence: > "For many ?normal? 2D image format the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like ?detector half size?." > The projection offset correspond exactly to the scaled U0,V0 parameters of the intrinsic matrix of the pinhole model, and in my understanding, they should be close to half detector size if all the out of plane rotations are negligible. > But... > When I generate a perfect geometry, without out of plane angles, with rtksimulatedgeometry, it appear that projection offsets are set to zero, so I think I have not understood this sentence: > "the projection offset is just the distance from the corner to D" The projection offset is the offset of the image origin from the detector origin (the orthogonal projection of the isocenter on the detector). 
For a perfect geometry, rtksimulatedgeometry assumes that both the image origin and the detector origin are at the center, so the projection offset is zero. But as I said, in many normal 2D image formats like .png, .tif, and .bmp, the image origin is not defined, and ITK/RTK uses the first pixel as the image origin. In this case the size of the projection offset is then the distance between the first pixel and the detector origin. If the latter is at the detector centre, the projection offset will be half the detector size. The sign depends on which quadrant of the detector coordinate system the first pixel sits in. > > An other aspect that puzzled my, is that I can't find documentation about what is the orientation of the u axis and v axis of the detector coordinate system (assuming a a 0 gantry angle) regarding the world coordinate system. > This information could help me to determine if my projectionOffset should be negative or positive. Without any rotation (gantry and detector), the detector coordinate system is perfectly aligned with the object coordinate system: detector_x // object_x, detector_y // object_y, and the detector origin is the orthogonal projection of the object origin on the detector plane. Then, there is another mapping from the image coordinate system to the detector coordinate system. I have already explained the relationship between the image origin and the detector origin above. How the image axes (u and v) are oriented with regard to the detector axes (x and y) depends on the direction cosines of the image. Again, this information does not exist in many 2D image formats and the default value in ITK/RTK is an identity matrix, so u/v and x/y are also aligned.
> > About the images' geometric data, I tried to use rtkprojectgeometricphantom with my geometry in order to see what origin, spacing and direction are attributed to the output image, and without surprise I experienced the following behaviour: > > Origin point: > ( - half_detector_size_in_mm/2, -half_detector_size_in_mm/2, -half_detector_size_in_mm/2 ) > the coordinate in Z is a bit odd, but why not? > Spacing > (detector_pixel_size_in_mm, detector_pixel_size_in_mm, 1 ) > Direction: > a classic 3*3 identity matrix > > These are exactly the values I use when importing my images into RTK. > > Thank you for your time, and help > > Simon: finding the position of the origin of the detector, and directions, etc... would require performing the exact same steps of geometric matrix decomposition I already use for the classic RTK geometric parameters plus some more, so I think it would only add complexity and probably useless steps to the process. > > Kind regards > > Thibault Notargiacomo > > > 2014-12-04 11:57 GMT+01:00 Chao Wu : >> >> Hi Thibault, >> >> The source offset appearing several times is because of a different view of one kind of detector rotation. A detector can have three kinds of rotations: the in-plane rotation defined in RTK is about the z axis, the out-of-plane rotation defined in RTK is about the x axis, and there should be another out-of-plane rotation about the y axis. Assuming a zero out-of-plane rotation about x, Fig 1 gives a common example of the rotation about y together with definitions of sid and sdd in some systems. I guess this figure may be more familiar and straightforward to some people. >> >> However RTK sees this differently. Since this out-of-plane rotation about y can in fact be merged into the gantry angle, it is ignored in RTK. On the other hand, parameters should be defined differently than in Fig 1 to represent this detector change, as shown in Fig 2: an "ideal"
source is positioned at B, sid is BE instead of AE, sdd is BD or AC instead of AF, and AB is the size of the source offset. The origin of the detector is not at the intersection F with the oblique ray AEF, but at the intersection D with the perpendicular ray BED from the "ideal" source B. The perpendicular ray AC from the real source A intersects the detector at C differing from D by CD or AB, the source offset, which is the reason that you see the source offset appear again in the projection translation matrix. If the in-plane rotation of the detector is zero, this source offset only has an x element, otherwise it contains both x and y elements. Lastly, the size of the projection offset is the distance between the origin of the projection image and the origin of the detector (point D). For many "normal" 2D image formats the origin of the image is just at the first pixel (one corner), so the size of the projection offset is just the distance from the corner to D and has nothing to do with things like "detector half size". >> >> In fact the out-of-plane rotation about x has a similar effect in RTK (causing shifts of source and detector origin, and changes of sid and sdd, etc. compared with the point of view of the Fig 1 style), although this angle itself is also needed for rotating the world coordinates. >> >> I hope I did not make any mistake in this long description... >> >> Regards, >> Chao >> >> >> 2014-12-03 15:27 GMT+01:00 Notargiacomo Thibault : >>> >>> Dear all, >>> >>> I am currently trying to import data generated with a custom tomographic system into RTK, and I am facing issues with this task. >>> >>> The system projection matrix is transparently calibrated, and the calibration process gives a 3*4 projection matrix for each acquisition position. >>> Each calibration matrix is a direct 3D world to 2D buffer index matrix.
>>> >>> Using the pinhole model, I tried to factorize this matrix as the product of various submatrices, including a 3D centered Euler transform, using this note as stated in rtkReg23Geometry.cxx. >>> The pinhole camera model I used can be found here at p18 of the pdf. >>> I think that the way I factorized the matrix is correct, and matches the GantryAngle/InPlanAngle/OutOfPlanAngle model described here . >>> >>> My problem arises when I try to model the x/z tilt of the detector: when decomposing my projection matrix into different matrices, each modelling a system coordinate change, I have: >>> - a world coordinate system to source centered system matrix (modeling Euler 3D rotation and also translation from isocenter to source) >>> - a source centered system to 2D buffer index matrix modeling source to detector and pixel size scaling and then detector translation (U0,V0) >>> >>> As I understand, the pinhole model should allow a perfect fit with the RTK geometry model in the following sense: >>> The extrinsic parameter matrix corresponds to the SourceTranslationM and RotationM in RTK, assuming that the order of the rotation follows the RTK reference. And the translation in z should be replaced by zero, as it corresponds to the source-isocenter distance, and is taken into account in the magnification step. >>> So I think it is easy to find all the rotation angles, and the sid distance as well >>> >>> The intrinsic parameter matrix can be decomposed in order to find the focal length (or source-detector distance) and the projection offset, from the U0, V0 parameters, subtracting the detector half size in each direction. >>> >>> What I do not understand is: >>> - In the RTK documentation, it is stated that "The detector position is defined with respect to the source" but the ProjectionTranslationM in RTK contains a term in sourceOffsetX-projOffsetX although sourceOffset has already been taken into account earlier.
>>> - Why reconstructions aren't working at all >>> >>> I enclosed a sample geometry file I have generated that provides some acceptable results when used for phantom projection, but totally wrong reconstructions when reconstructing my image data with SART (sample image taken from a reconstructed volume). >>> >>> Thank you in advance for your help, and sorry for the long mail >>> >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >>> >> > From simon.rit at creatis.insa-lyon.fr Fri Dec 5 08:39:53 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 5 Dec 2014 14:39:53 +0100 Subject: [Rtk-users] Area of Integration JosephForwardProjectionImageFilter RayCastInterpolatorForwardProjectionImageFilter In-Reply-To: References: Message-ID: Hi Steffen, I'm not sure I understand it all but isn't this due to interpolation? If you were using a finer voxelized box as input, the difference between Siddon and Joseph should decrease. Regarding tracking every step, yes, you should be able to do such things (and if you are not, I'm open to modify the code). We have done some similar work in Gate using RTK. This is not public yet but the idea is to implement specific functors for Joseph. You should look at the code and the two TInterpolationWeightMultiplication and TProjectedValueAccumulation templates in particular. If you want an example, I'll send you a copy of what we've done in Gate. Simon On Fri, Dec 5, 2014 at 9:50 AM, Steffen Lukas wrote: > Sorry, mail went out too quickly. > > > > > Hi Simon > > I checked against my quick Siddon-style ray-tracer implementation. > > I tried the enlarged volume with 0-boundary already before, but can't > resolve the issue completely. > > I put an example below, for some reason I get signal at the outer > detectors where there should be none.
> > Also: Can I somehow keep track of the voxel traversed in your code > (for dosimetric and simulation applications). > > > > > > Example: > > > double sid = 100, aid = 20; > int nproj = 1; > double first_angle = 0, angular_arc = 360; > > volume_spacing(1, 1, 1); > volume_center(0.0, 0.0, 0.0); > volume_size(3, 3, 3); > > projection_center(0.0, 0.0, 0.0); > projection_size(5, 5, nproj); > projection_spacing(1, 1, 1.0); > > > The projections are: > > (1) Joseph projector > > z: 0 > 0: 1: 2: 3: 4: > 0: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > 1: 1.000174 3.000208 3.000104 3.000208 1.000174 > 2: 1.000139 3.000104 3 3.000104 1.000139 > 3: 1.000174 3.000208 3.000104 3.000208 1.000174 > 4: 0.3339816 1.000174 1.000139 1.000174 0.3339816 > > > (2) My Raytracer: > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > (3) RayBox Integration (fom -1.5 to 1.5) > > z: 0 > 0: 1: 2: 3: 4: > 0: 0 0 0 0 0 > 1: 0 3.000208 3.000104 3.000208 0 > 2: 0 3.000104 3 3.000104 0 > 3: 0 3.000208 3.000104 3.000208 0 > 4: 0 0 0 0 0 > > Value except at the boundary coincide, only at the detector boundary > there is signal that I dont understand > > Rgds > Steffen > > > > 2014-12-05 9:46 GMT+01:00, Steffen Lukas : >> Hi Simon >> >> I check against my quick ray-tracer-implementation in Siddon style. >> >> I tried the enlarged volume with 0-boundary already before, but cant >> resolve the issue completely. >> >> I put an example below, for some reason I get signal at the outer >> detetectors where there should be none. >> >> Also: Can I somehow keep track of the voxel traversed in your code >> (for dosimetric and simulation applications). 
>> >> Arne >> >> >> >> Example: >> >> >> double sid = 100, aid = 20; >> int nproj = 1; >> double first_angle = 0, angular_arc = 360; >> >> volume_spacing(1, 1, 1); >> volume_center(0.0, 0.0, 0.0); >> volume_size(3, 3, 3); >> >> projection_center(0.0, 0.0, 0.0); >> int3 projection_size(5, 5, nproj); >> vect3 projection_spacing(1, 1, 1.0); >> matr3 projection_direction = matr3::Identity(); >> >> >> 2014-12-04 16:30 GMT+01:00, Simon Rit : >>> Hi, >>> Good point. Since we interpolate, we chose the model that you mention. A >>> simple trick that should work is to add a 0 border around your volume. >>> That >>> will allow you to compare your results. >>> Out of curiosity, what's your projector? If it's Siddon, that would make >>> sense but I wonder what you do if it's an interpolation model (Joseph, >>> trilinear, etc). >>> Simon >>> >>> On Thu, Dec 4, 2014 at 12:09 PM, Arnheim Blanchr >>> >>> wrote: >>> >>>> Dear All >>>> >>>> I have a question regarding the forward projectors. It seems that at >>>> the boundary integration starts at mid-voxel which makes it difficult >>>> for me to compare with our own implemention since information is >>>> partly lost. >>>> >>>> Can I somehow setup the projectors such that all (full) voxel are >>>> integrated? >>>> >>>> Thanks a lost >>>> Arne >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> http://public.kitware.com/mailman/listinfo/rtk-users >>>> >>> >> From spollmann at robarts.ca Tue Dec 9 19:39:41 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Tue, 9 Dec 2014 19:39:41 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue Message-ID: <5487964D.5070601@robarts.ca> A recent update to rtkMacro.h seems to have caused the ggo command line processor to ignore command line flags. (i.e. I can't get any verbose output with '-v'). 
It seems to happen after making a second call to: cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) Removing this second call has resolved the issue for me. I'm not sure, however, what the intended use of the second call was for (it occurs immediately after: args_params.check_required = 1;), which I feel could just be moved above the first call, as it happens regardless, but I may be missing something. I've attached my quickly modified rtkMacro.h for comparison to the latest github commit. Anyhow, hopefully this info is useful, and doesn't only affect me. Steve Our system setup: -Ubuntu 14.04 x64 -gcc 4.8.2 -cuda 6.5 -------------- next part -------------- A non-text attachment was scrubbed... Name: rtkMacro.h Type: text/x-chdr Size: 6578 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 03:53:40 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 09:53:40 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: <54880A14.6070601@creatis.insa-lyon.fr> Hi Steven, Thanks a lot for having tracked the issue. I had the same problem and didn't know where to start to diagnose it. So yes, this info is useful. I do not know why this second call has been added, though. Cyril On 12/10/2014 01:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call has resolved the issue for me.
> I'm not sure, however, what the intended use of the second call was > for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 04:01:06 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 10:01:06 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5487964D.5070601@robarts.ca> References: <5487964D.5070601@robarts.ca> Message-ID: Hi, Thanks for the report, very useful information. I could reproduce the bug and I hope that I have fixed it. Briefly: - I have changed the code because Ben Champion reported memory leaks and I noticed that they occurred in deprecated functions of gengetopt that I don't use anymore, - the way the new macro (as well as the previous one) is written is: first read the command line to find if a config file is passed, then read the config file and finally read the command line again to check that everything has been passed. - your fix was not perfect because we would not have checked that the required options were set, - it turns out that disabling the override option did the job. Everything works fine now but let me know if you notice something wrong again.
Thanks again, Simon On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann wrote: > A recent update to rtkMacro.h seems to have caused the ggo command line > processor to ignore command line flags. (i.e. I can't get any verbose > output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call was for > (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it happens > regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the latest > github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. > > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From padraig.looney at gmail.com Wed Dec 10 06:59:36 2014 From: padraig.looney at gmail.com (Padraig Looney) Date: Wed, 10 Dec 2014 11:59:36 +0000 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering Message-ID: Dear list, We have been using RTK to reconstruct some digital breast tomosynthesis images. The reconstruction using BackProjectionImageFilter looks good. The only issue we are having is in specifying the coordinates of the reconstructed volume. The coordinate system is attached and the code we use to reconstruct is below. I expected the origin of the first slice in the reconstructed volume to be at (w,-h/2,offset). What I find is that the reconstructed volume is shifted in the y direction by about half the height (but not exactly). 
The X position looks correct for this phantom. rtkBackProjectionImageFilter is described as "implementation of the back projection step of the FDK also for *filtered* back projection reconstruction for cone-beam CT images with a circular source trajectory". However, I could not find any filtering of data in the code. Could you please confirm if there is filtering in this code and what type of filters there are (ramp, Hann etc)? Also, is the difference with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is for cone beam while rtkBackProjectionImageFilter is not? // Create reconstructed image typedef rtk::ConstantImageSource< FloatImageType > ConstantImageSourceType; ConstantImageSourceType::PointType origin; ConstantImageSourceType::SpacingType spacing; ConstantImageSourceType::SizeType sizeOutput; ConstantImageSourceType::DirectionType direction; direction.SetIdentity(); sizeOutput[0] = 1890; //1747; //1890; as found in dicom info sizeOutput[1] = 2457; //as found in dicom info sizeOutput[2] = 1; //as found in dicom info double offset(26.27); // Gap between detector and sample origin[0] = 171.99; origin[1] = -223/2; //223 is the height of the reconstructed volume origin[2] = offset+0; spacing[0] = 0.091; spacing[1] = 0.091; spacing[2] = 1; direction [0][0] = -1; direction [0][1] = 0; direction [0][2] = 0; direction [1][0] = 0; direction [1][1] = 1; direction [1][2] = 0; direction [2][0] = 0; direction [2][1] = 0; direction [2][2] = 1; ConstantImageSourceType::Pointer constantImageSource = ConstantImageSourceType::New(); constantImageSource->SetOrigin( origin ); constantImageSource->SetSpacing( spacing ); constantImageSource->SetSize( sizeOutput ); constantImageSource->SetConstant( 0.
); constantImageSource->SetDirection(direction); const ImageType::DirectionType& direct = constantImageSource->GetDirection(); std::cout <<"Direction3DZeroMatrix= " << std::endl; std::cout << direct << std::endl; std::cout << "Performing reconstruction" << std::endl; //BackProjection reconstruction (no filtering) typedef rtk::ProjectionGeometry<3> ProjectionGeometry; ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> FDKCPUType; FDKCPUType::Pointer feldkamp = FDKCPUType::New(); feldkamp->SetInput( 0, constantImageSource->GetOutput() ); feldkamp->SetInput( 1, imageStack); feldkamp->SetGeometry( baseGeom ); feldkamp->Update(); -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reconstruct.pdf Type: application/pdf Size: 12356 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 10 07:35:19 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 10 Dec 2014 13:35:19 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: References: Message-ID: <54883E07.9060308@creatis.insa-lyon.fr> Hi Padraig, I can only answer part of your questions, sorry about the others: neither rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform filtering, and both are cone-beam. In fact, at the moment, cone-beam is the only geometry available in RTK. The difference is that rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, and redefines some methods (I think it performs a specific weighting of projection data depending on the distance to the central plane, as described in the FDK paper, but I cannot say for sure). As far as I know, there is no all-in-one filter for FDK in RTK.
You have to plug the filters together yourself, the same way it is done in the rtkfdk application, and the back projection filter you must then use is either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. If you wish to design iterative reconstruction algorithms, on the other hand, use the non-FDK back projection filters. Without filtering, your reconstruction is probably very blurry. I would advise you to try to convert your data to the ITK standard mhd and raw, and to use the rtkfdk application. Once you get a good reconstruction out-of-the-box with your data, you can start playing with internal filters. Regards, Cyril On 12/10/2014 12:59 PM, Padraig Looney wrote: > Dear list, > > We have been using RTK to reconstruct some digital breast > tomosynthesis images. The reconstruction using > BackProjectionImageFilter looks good. The only issue we are having is > in specifying the coordinates of the reconstructed volume. The > coordinate system is attached and the code we use to reconstruct is > below. I expected the origin of the first slice in the reconstructed > volume to be at (w,-h/2,offset). What I find is that the reconstructed > volume is shifted in the y direction by about half the height (but not > exactly). The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as "implementation of the > back projection step of the FDK also for *_filtered_* back projection > reconstruction for cone-beam CT images with a circular source > trajectory". However, I could not find any filtering of data in the > code. Could you please confirm if there is filtering in this code and > what type of filters there are (ramp, Hann etc)? Also, is the > difference with rtkBackProjectionImageFilter that > rtkFDKBackProjectionImageFilter is for cone beam while > rtkBackProjectionImageFilter is not? 
> > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Wed Dec 10 10:54:29 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 10 Dec 2014 16:54:29 +0100 Subject: [Rtk-users] positioning of the reconstructed volume and some questions on filtering In-Reply-To: <54883E07.9060308@creatis.insa-lyon.fr> References: <54883E07.9060308@creatis.insa-lyon.fr> Message-ID: Hi, Please refer to my previous post to understand the coordinates of your volume: http://public.kitware.com/pipermail/rtk-users/2014-December/000634.html That should explain your coordinate system. Cyril is right, there is no filtering in the FDKBackProjectionImageFilter and the BackProjectionImageFilter. Both work for perspective projections but they also work for parallel beams (and then give the same result).
Simon On Wed, Dec 10, 2014 at 1:35 PM, Cyril Mory wrote: > Hi Padraig, > > I can only answer part of your questions, sorry about the others: neither > rtkBackProjectionImageFilter nor rtkFDKBackProjectionImageFilter perform > filtering, and both are cone-beam. In fact, at the moment, cone-beam is the > only geometry available in RTK. The difference is that > rtkFDKBackProjectionImageFilter inherits from rtkBackProjectionImageFilter, > and redefines some methods (I think it performs a specific weighting of > projection data depending on the distance to the central plane, as > described in the FDK paper, but I cannot say for sure). > As far as I know, there is no all-in-one filter for FDK in RTK. You have > to plug the filters together yourself, the same way it is done in the > rtkfdk application, and the back projection filter you must then use is > either rtkFDKBackProjectionImageFilter or its CUDA ou OPENCL counterpart. > If you wish to design iterative reconstruction algorithms, on the other > hand, use the non-FDK back projection filters. > > Without filtering, your reconstruction is probably very blurry. I would > advise you to try to convert your data to the ITK standard mhd and raw, and > to use the rtkfdk application. Once you get a good reconstruction > out-of-the-box with your data, you can start playing with internal filters. > > Regards, > Cyril > > > On 12/10/2014 12:59 PM, Padraig Looney wrote: > > Dear list, > > We have been using RTK to reconstruct some digital breast tomosynthesis > images. The reconstruction using BackProjectionImageFilter looks good. The > only issue we are having is in specifying the coordinates of the > reconstructed volume. The coordinate system is attached and the code we use > to reconstruct is below. I expected the origin of the first slice in the > reconstructed volume to be at (w,-h/2,offset). What I find is that the > reconstructed volume is shifted in the y direction by about half the height > (but not exactly). 
The X position looks correct for this phantom. > > rtkBackProjectionImageFilter is described as ?implementation of the back > projection step of the FDK also for *filtered* back projection > reconstruction for cone-beam CT images with a circular source trajectory?. > However, I could not find any filtering of data in the code. Could you > please confirm if there is filtering in this code and what type of filters > there are (ramp, Hann etc)? Also, is the difference > with rtkBackProjectionImageFilter that rtkFDKBackProjectionImageFilter is > for cone beam while rtkBackProjectionImageFilter is not? > > > // Create reconstructed image > typedef rtk::ConstantImageSource< FloatImageType > > ConstantImageSourceType; > ConstantImageSourceType::PointType origin; > ConstantImageSourceType::SpacingType spacing; > ConstantImageSourceType::SizeType sizeOutput; > ConstantImageSourceType::DirectionType direction; > direction.SetIdentity(); > > sizeOutput[0] = 1890; //1747; //1890; as found in dicom info > sizeOutput[1] = 2457; //as found in dicom info > sizeOutput[2] = 1; //as found in dicom info > > double offset(26.27); // Gap between detector and sample > origin[0] = 171.99; > origin[1] = -223/2; //223 is the height of the reconstructed volume > origin[2] = offset+0; > > spacing[0] = 0.091; > spacing[1] = 0.091; > spacing[2] = 1; > > direction [0][0] = -1; > direction [0][1] = 0; > direction [0][2] = 0; > direction [1][0] = 0; > direction [1][1] = 1; > direction [1][2] = 0; > direction [2][0] = 0; > direction [2][1] = 0; > direction [2][2] = 1; > > ConstantImageSourceType::Pointer constantImageSource = > ConstantImageSourceType::New(); > > constantImageSource->SetOrigin( origin ); > constantImageSource->SetSpacing( spacing ); > constantImageSource->SetSize( sizeOutput ); > constantImageSource->SetConstant( 0. 
); > constantImageSource->SetDirection(direction); > > const ImageType::DirectionType& direct = > constantImageSource->GetDirection(); > > std::cout <<"Direction3DZeroMatrix= " << std::endl; > std::cout << direct << std::endl; > > std::cout << "Performing reconstruction" << std::endl; > > //BackProjection reconstruction (no filtering) > typedef rtk::ProjectionGeometry<3> ProjectionGeometry; > ProjectionGeometry::Pointer baseGeom = geometry.GetPointer(); > typedef rtk::BackProjectionImageFilter< ImageType ,ImageType> > FDKCPUType; > FDKCPUType::Pointer feldkamp = FDKCPUType::New(); > feldkamp->SetInput( 0, constantImageSource->GetOutput() ); > feldkamp->SetInput( 1, imageStack); > feldkamp->SetGeometry( baseGeom ); > feldkamp->Update(); > > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttp://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spollmann at robarts.ca Wed Dec 10 15:27:02 2014 From: spollmann at robarts.ca (Steven Pollmann) Date: Wed, 10 Dec 2014 15:27:02 -0500 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: References: <5487964D.5070601@robarts.ca> Message-ID: <5488AC96.3090803@robarts.ca> That makes sense, thanks for the quick usage explanation, and fix. (Disabling the override option makes sense, and I didn't have time to trace through gengetopt. I thought I was missing something, as none of the non-flag arguments were being reset to null or default values, and thus thought 'override' meant something else!) Thanks again, glad the info was helpful.
Steve On 14-12-10 4:01 AM, Simon Rit wrote: > Hi, > Thanks for the report, very useful information. I could reproduce the > bug and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks > and I noticed that they occured in deprecated functions of gengetopt > that I don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then > read the config file and finally read the command line again to check > that everything has been passed. > - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something > wrong again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > > A recent update to rtkMacro.h seems to have caused the ggo command > line processor to ignore command line flags. (i.e. I can't get any > verbose output with '-v'). > It seems to happen after making a second call to: > > cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, > &args_params) > > Removing this second call, has resolved the issue for me. > I'm not sure, however, what the intended use of the second call > was for (it occurs immediately after: > > args_params.check_required = 1; > > which I feel could just be moved above the first call, as it > happens regardless, but I may be missing something. > > I've attached my quickly modified rtkMacro.h for comparison to the > latest github commit. > > Anyhow, hopefully this info is useful, and doesn't only affect me. 
> > Steve > > Our system setup: > -Ubuntu 14.04 x64 > -gcc 4.8.2 > -cuda 6.5 > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Fri Dec 12 08:10:51 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Fri, 12 Dec 2014 14:10:51 +0100 Subject: [Rtk-users] rtkMacro.h GGO issue In-Reply-To: <5488AC96.3090803@robarts.ca> References: <5487964D.5070601@robarts.ca> <5488AC96.3090803@robarts.ca> Message-ID: My fix did not work. Cyril (Mory) reported that multiple options were read twice. I hope this new fix will work but don't hesitate to report other issues with gengetopt. Thanks again for your reports, Simon On Wed, Dec 10, 2014 at 9:27 PM, Steven Pollmann wrote: > > That makes sense, thanks for the quick usage explanation, and fix. > (Disabling the override option makes sense, and I didn't have time to trace > through gengetopt.) I thought I was missing something, as none of the > non-flag arguments were being reset (to null, or default values), and thus > thought 'override' meant something else! > > Thanks again, glad the info was helpful. > > Steve > > > On 14-12-10 4:01 AM, Simon Rit wrote: > > Hi, > Thanks for the report, very useful information. I could reproduce the bug > and I hope that I have fixed it. Briefly: > - I have changed the code because Ben Champion reported memory leaks and > I noticed that they occurred in deprecated functions of gengetopt that I > don't use anymore, > - the way the new macro (as well as the previous one) is written is: > first read the command line to find if a config file is passed, then read > the config file and finally read the command line again to check that > everything has been passed. 
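The parse-twice strategy Simon describes (scan the command line for a config file, read the config file as defaults, then re-read the command line so it wins and required options are checked) can be sketched as follows. This is an illustrative mock-up, not the gengetopt-generated code: `parseArgs`, `parseWithConfig`, and the `--config`/`--geometry` option names are assumptions for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for a gengetopt-generated parser: fills 'opts'
// from (flag, value) pairs and, when checkRequired is set, verifies that
// the mandatory --geometry option has been supplied by now.
bool parseArgs(const std::vector<std::string>& argv,
               std::map<std::string, std::string>& opts,
               bool checkRequired)
{
  for (std::size_t i = 0; i + 1 < argv.size(); i += 2)
    opts[argv[i]] = argv[i + 1];
  return !checkRequired || opts.count("--geometry") > 0;
}

// Sketch of the strategy described above: pass 1 parses the command line
// without checking required options (only to find --config), the config
// file then supplies defaults, and pass 2 re-reads the command line so
// that it overrides the config file and required options get checked.
bool parseWithConfig(const std::vector<std::string>& argv,
                     const std::map<std::string, std::string>& configFile,
                     std::map<std::string, std::string>& opts)
{
  parseArgs(argv, opts, false);        // pass 1: look for --config only
  if (opts.count("--config"))
    for (const auto& kv : configFile)  // config file provides defaults...
      opts.insert(kv);                 // ...insert() keeps CLI values
  return parseArgs(argv, opts, true);  // pass 2: CLI wins, check required
}
```

Skipping the required-options check in pass 1 is what Steven's fix removed; doing the check there would reject a command line whose mandatory options only arrive via the config file.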
> - your fix was not perfect because we would not have checked that the > required options were set, > - it turns out that disabling the override option did the job. > Everything sworks fine now but let met know if you notice something wrong > again. Thanks again, > Simon > > On Wed, Dec 10, 2014 at 1:39 AM, Steven Pollmann > wrote: > >> A recent update to rtkMacro.h seems to have caused the ggo command line >> processor to ignore command line flags. (i.e. I can't get any verbose >> output with '-v'). >> It seems to happen after making a second call to: >> >> cmdline_parser_##ggo_filename##_ext(argc, argv, &args_info, &args_params) >> >> Removing this second call, has resolved the issue for me. >> I'm not sure, however, what the intended use of the second call was for >> (it occurs immediately after: >> >> args_params.check_required = 1; >> >> which I feel could just be moved above the first call, as it happens >> regardless, but I may be missing something. >> >> I've attached my quickly modified rtkMacro.h for comparison to the latest >> github commit. >> >> Anyhow, hopefully this info is useful, and doesn't only affect me. >> >> Steve >> >> Our system setup: >> -Ubuntu 14.04 x64 >> -gcc 4.8.2 >> -cuda 6.5 >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Fri Dec 12 12:42:26 2014 From: lomahu at gmail.com (Howard) Date: Fri, 12 Dec 2014 12:42:26 -0500 Subject: [Rtk-users] ADMMTVReconstruction Message-ID: I am testing the ADMM total variation reconstruction with sparse data sample. I could reconstruct but the results were not as good as expected. In other words, it didn't show much improvement compared to fdk reconstruction using the same sparse projection data. 
The parameters I used in ADMMTV were the following: --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 while the fdk reconstruction parameters are: --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 The dimensions were chosen to include the entire anatomy. 72 projections were selected out of 646 projections for a 360 degree scan for both calculations. Which parameters should I adjust (alpha, beta, iterations?), and how, to improve the ADMMTV reconstruction? There is not much description of this application on the wiki page. Thanks, -howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Mon Dec 15 04:07:45 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Mon, 15 Dec 2014 10:07:45 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: Message-ID: <548EA4E1.4090801@creatis.insa-lyon.fr> Hello Howard, Good to hear that you're using RTK :) I'll try to answer all your questions, and give you some advice: - In general, you can expect some improvement over rtkfdk, but not a huge one - You can find the calculations in my PhD thesis https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the introduction is in French) - Adjusting the parameters is, in itself, a research topic (sorry !). Alpha controls the amount of regularization and only that (the higher, the more regularization). Beta, theoretically, should only change the convergence speed, provided you do an infinite number of iterations (I know it doesn't help, sorry again !). In practice, beta is ubiquitous and appears everywhere in the calculations, therefore it is hard to predict what effect an increase/decrease of beta will give on the images. I would keep it as is, and play on alpha - 3 iterations is way too little. I typically used 30 iterations. 
Using the CUDA forward and back projectors helped a lot to keep the computation time manageable - The quality of the results depends a lot on the nature of the image you are trying to reconstruct. In a nutshell, the algorithm assumes that the image you are reconstructing has a certain form of regularity, and discards the potential solutions that do not have it. This assumption partly compensates for the lack of data. ADMM TV assumes that the image you are reconstructing is piecewise constant, i.e. has large uniform areas separated by sharp borders. If your image is a phantom, it should give good results. If it is a real patient, you should probably change to another algorithm that assumes another form of regularity in the images (try rtkadmmwavelets) - You can find out whether your typical images can benefit from TV regularization by reconstructing from all projections with rtkfdk, then applying rtktotalvariationdenoising on the reconstructed volume (try 50 iterations and adjust the gamma parameter: high gamma means high regularization). If this denoising implies an unacceptable loss of quality, stay away from TV for these images, and try wavelets I hope this helps Looking forward to reading you again, Cyril On 12/12/2014 06:42 PM, Howard wrote: > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as > expected. In other words, it didn't show much improvement compared to > fdk reconstruction using the same sparse projection data. > The parameters I used in ADMMTV were the following: > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > while the fdk reconstruction parameters are: > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > The dimensions were chosen to include the entire anatomy. 72 > projections were selected out of 646 projections for a 360 degree scan > for both calculations. 
> What parameters and how can I adjust (like alpha, beta, or > iterations?) to improve the ADMMTV reconstruction? There is not much > description of this application from the wiki page. > Thanks, > -howard > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lomahu at gmail.com Wed Dec 17 09:49:07 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 09:49:07 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <548EA4E1.4090801@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks very much for your detailed and nice description on how to use the admmtv reconstruction. I followed your suggestions and re-ran reconstructions using admmtotalvariation and admmwavelets with cbct projection data from a thoracic patient. I am reporting what I found and hope these will give you information for further improvement. 1. I repeated admmtotalvariation with 30 iterations. No improvement was observed. As a matter of fact, the reconstructed image is getting a lot noisier compared to that using 3 iterations. The contrast is getting worse as well. I tried to play around with window & level in case I was fooled but apparently more iterations gave worse results. 2. Similarly I ran 30 iterations using admmwavelets. Slightly better reconstruction compared with total variation. 3. Then I went ahead to test whether TV benefits us at all using the tvdenoising application on the fdk-reconstructed image reconstructed from full projection set. I found that the more iterations, the more blurry the image became. 
For example, with 50 iterations the contrast on the denoised image is very low so that the vertebrae and surrounding soft tissue are hardly distinguishable. Changing gamma to 0.2, 0.5, 1.0, or 10 did not seem to make a difference on the image. With 5 iterations the denoising seems to work fairly well. Again, changing gamma didn't make a difference. I hope I didn't misuse the totalvariationdenoising application. The command I executed was: rtktotalvariationdenoising -i out.mha -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 In summary, admmwavelets seems to perform better than admmtotalvariation but neither gave satisfactory results. Not sure what we can infer from the TV denoising study. I could send my study to you if there is a need. Please let me know what tests I could run. Further help on improvement is definitely welcome and appreciated. -Howard On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not a huge > one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the > introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry !). 
Using > the CUDA forward and back projectors helped a lot maintain the computation > time manageable > - The quality of the results depends a lot on the nature of the image you > are trying to reconstruct. In a nutshell, the algorithm assumes that the > image you are reconstructing has a certain form of regularity, and discards > the potential solutions that do not have it. This assumption partly > compensates for the lack of data. ADMM TV assumes that the image you are > reconstructing is piecewise constant, i.e. has large uniform areas > separated by sharp borders. If your image is a phantom, it should give good > results. If it is a real patient, you should probably change to another > algorithm that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, then > applying rtktotalvariationdenoising on the reconstructed volume (try 50 > iterations and adjust the gamma parameter: high gamma means high > regularization). If this denoising implies an unacceptable loss of quality, > stay away from TV for these images, and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: > > I am testing the ADMM total variation reconstruction with sparse data > sample. I could reconstruct but the results were not as good as expected. > In other words, it didn't show much improvement compared to fdk > reconstruction using the same sparse projection data. > > The parameters I used in ADMMTV were the following: > > --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 > > while the fdk reconstruction parameters are: > > --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 > > The dimensions were chosen to include the entire anatomy. 72 projections > were selected out of 646 projections for a 360 degree scan for both > calculations. 
> > What parameters and how can I adjust (like alpha, beta, or iterations?) to > improve the ADMMTV reconstruction? There is not much description of this > application from the wiki page. > > Thanks, > > -howard > > > > _______________________________________________ > Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users > > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Wed Dec 17 10:19:05 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 17 Dec 2014 16:19:05 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> Message-ID: <54919EE9.3010406@creatis.insa-lyon.fr> Hi Howard, Thanks for the detailed feedback. The image getting blurry is typically due to too high a gamma. Depending on your data, gamma may have to be set to a very small value (I use 0.007 in some reconstructions on clinical data). Can you send over your volume reconstructed from full projection data, and I'll have a quick look? There is a lot of instinct in the setting of the parameters. With time, one gets used to finding a correct set of parameters without really knowing how. I can also try to reconstruct from your cbct data if you send me the projections and the geometry. Best regards, Cyril On 12/17/2014 03:49 PM, Howard wrote: > Hi Cyril, > Thanks very much for your detailed and nice description on how to use > the admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > I am reporting what I found and hope these will give you information > for further improvement. > 1. I repeated admmtotalvariation with 30 iterations. 
No improvement > was observed. As a matter of fact, the reconstructed image is getting > a lot noiser compared to that using 3 iterations. The contrast is > getting worse as well. I tried to play around with window & level in > case I was fooled but apparently more iterations gave worse results. > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more > blurry the image became. For example, with 50 iterations the contrast > on the denoised image is very low so that the vertebrae and > surrounding soft tissue are hardly distinguishable. Changing > gamma's at 0.2, 0.5, 1.0, 10 did not seem to make a difference on the > image. With 5 iterations the denoising seems to work fairly well. > Again, changing gamma's didn't make a difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > In summary, tdmmwavelets seems perform better than tdmmtotalvariation > but neither gave satisfactory results. No sure what we can infer from > the TV denoising study. I could send my study to you if there is a > need. Please let me know what tests I could run. Further help on > improvement is definitely welcome and appreciated. > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory > > wrote: > > Hello Howard, > > Good to hear that you're using RTK :) > I'll try to answer all your questions, and give you some advice: > - In general, you can expect some improvement over rtkfdk, but not > a huge one > - You can find the calculations in my PhD thesis > https://tel.archives-ouvertes.fr/tel-00985728 (in English. 
Only > the introduction is in French) > - Adjusting the parameters is, in itself, a research topic (sorry > !). Alpha controls the amount of regularization and only that (the > higher, the more regularization). Beta, theoretically, should only > change the convergence speed, provided you do an infinite number > of iterations (I know it doesn't help, sorry again !). In > practice, beta is ubiquitous and appears everywhere in the > calculations, therefore it is hard to predict what effect an > increase/decrease of beta will give on the images. I would keep it > as is, and play on alpha > - 3 iterations is way too little. I typically used 30 iterations. > Using the CUDA forward and back projectors helped a lot maintain > the computation time manageable > - The quality of the results depends a lot on the nature of the > image you are trying to reconstruct. In a nutshell, the algorithm > assumes that the image you are reconstructing has a certain form > of regularity, and discards the potential solutions that do not > have it. This assumption partly compensates for the lack of data. > ADMM TV assumes that the image you are reconstructing is piecewise > constant, i.e. has large uniform areas separated by sharp borders. > If your image is a phantom, it should give good results. If it is > a real patient, you should probably change to another algorithm > that assumes another form of regularity in the images (try > rtkadmmwavelets) > - You can find out whether you typical images can benefit from TV > regularization by reconstructing from all projections with rtkfdk, > then applying rtktotalvariationdenoising on the reconstructed > volume (try 50 iterations and adjust the gamma parameter: high > gamma means high regularization). 
If this denoising implies an > unacceptable loss of quality, stay away > from TV for these images, > and try wavelets > > I hope this helps > > Looking forward to reading you again, > Cyril > > > On 12/12/2014 06:42 PM, Howard wrote: >> I am testing the ADMM total variation reconstruction with >> sparse data sample. I could reconstruct but the results were >> not as good as expected. In other words, it didn't show much >> improvement compared to fdk reconstruction using the same >> sparse projection data. >> The parameters I used in ADMMTV were the following: >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >> 1000 -n 3 >> while the fdk reconstruction parameters are: >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> The dimensions were chosen to include the entire anatomy. 72 >> projections were selected out of 646 projections for a 360 >> degree scan for both calculations. >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is >> not much description of this application from the wiki page. >> Thanks, >> -howard >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> http://public.kitware.com/mailman/listinfo/rtk-users > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... 
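The piecewise-constant assumption behind TV regularization can be made concrete with a tiny sketch: the total variation of a signal is the sum of absolute differences between neighbouring samples, so a clean step edge scores low while noise scores high. This is illustrative only, not RTK code:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch (not RTK code): the total variation (TV) of a
// discrete signal is the sum of absolute differences between
// neighbouring samples. A piecewise-constant signal with one sharp edge
// has low TV; a noisy signal covering the same range has high TV, which
// is why minimizing TV suppresses noise while preserving sharp borders.
double totalVariation(const std::vector<double>& x)
{
  double tv = 0.0;
  for (std::size_t i = 1; i < x.size(); ++i)
    tv += std::fabs(x[i] - x[i - 1]);
  return tv;
}
```

A phantom (large uniform areas, sharp borders) is close to the low-TV ideal, which is why ADMM TV performs well on it; real anatomy with fine texture is not, which is why wavelet regularization can be the better fit.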
URL: From lomahu at gmail.com Wed Dec 17 11:02:41 2014 From: lomahu at gmail.com (Howard) Date: Wed, 17 Dec 2014 11:02:41 -0500 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: <54919EE9.3010406@creatis.insa-lyon.fr> References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: Hi Cyril, I've sent you two files via wetransfer.com: one is the sparse projection set with geometry file and the other is the fdk reconstructed image based on full projection set. Please let me know if you have trouble receiving them. Thanks very much for looking into this. -Howard On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory < cyril.mory at creatis.insa-lyon.fr> wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. Depending > on you data, gamma can have to be set to a very small value (I use 0.007 in > some reconstructions on clinical data). Can you send over your volume > reconstructed from full projection data, and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With time, > one gets used to finding a correct set of parameters without really knowing > how. I can also try to reconstruct from your cbct data if you send me the > projections and the geometry. > > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: > > Hi Cyril, > > Thanks very much for your detailed and nice description on how to use the > admmtv reconstruction. I followed your suggestions and re-ran > reconstructions using admmtotalvariation and admmwavelets with cbct > projection data from a thoracic patient. > > I am reporting what I found and hope these will give you information for > further improvement. > > 1. I repeated admmtotalvariation with 30 iterations. No improvement was > observed. As a matter of fact, the reconstructed image is getting a lot > noiser compared to that using 3 iterations. The contrast is getting worse > as well. 
I tried to play around with window & level in case I was fooled > but apparently more iterations gave worse results. > > 2. Similarly I ran 30 iterations using admmwavelets. Slightly better > reconstruction compared with total variation. > > 3. Then I went ahead to test if TV benefits us anything using the > tvdenoising application on the fdk-reconstructed image reconstructed > from full projection set. I found that the more iterations, the more blurry > the image became. For example, with 50 iterations the contrast on the > denoised image is very low so that the vertebrae and surrounding soft > tissue are hardly distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 > did not seem to make a difference on the image. With 5 iterations the > denoising seems to work fairly well. Again, changing gamma's didn't make a > difference. > I hope I didn't misused the totalvariationdenoising application. The > command I executed was: rtktotalvariationdenoising -i out.mha -o > out_denoising_n50_gamma05 --gamma 0.5 -n 50 > > In summary, tdmmwavelets seems perform better than tdmmtotalvariation but > neither gave satisfactory results. No sure what we can infer from the TV > denoising study. I could send my study to you if there is a need. Please > let me know what tests I could run. Further help on improvement is > definitely welcome and appreciated. > > -Howard > > On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory < > cyril.mory at creatis.insa-lyon.fr> wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, but not a huge >> one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. Only the >> introduction is in French) >> - Adjusting the parameters is, in itself, a research topic (sorry !). 
>> Alpha controls the amount of regularization and only that (the higher, the >> more regularization). Beta, theoretically, should only change the >> convergence speed, provided you do an infinite number of iterations (I know >> it doesn't help, sorry again !). In practice, beta is ubiquitous and >> appears everywhere in the calculations, therefore it is hard to predict >> what effect an increase/decrease of beta will give on the images. I would >> keep it as is, and play on alpha >> - 3 iterations is way too little. I typically used 30 iterations. Using >> the CUDA forward and back projectors helped a lot maintain the computation >> time manageable >> - The quality of the results depends a lot on the nature of the image you >> are trying to reconstruct. In a nutshell, the algorithm assumes that the >> image you are reconstructing has a certain form of regularity, and discards >> the potential solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the image you are >> reconstructing is piecewise constant, i.e. has large uniform areas >> separated by sharp borders. If your image is a phantom, it should give good >> results. If it is a real patient, you should probably change to another >> algorithm that assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether you typical images can benefit from TV >> regularization by reconstructing from all projections with rtkfdk, then >> applying rtktotalvariationdenoising on the reconstructed volume (try 50 >> iterations and adjust the gamma parameter: high gamma means high >> regularization). If this denoising implies an unacceptable loss of quality, >> stay away from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >> >> I am testing the ADMM total variation reconstruction with sparse data >> sample. 
I could reconstruct but the results were not as good as expected. >> In other words, it didn't show much improvement compared to fdk >> reconstruction using the same sparse projection data. >> >> The parameters I used in ADMMTV were the following: >> >> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta 1000 -n 3 >> >> while the fdk reconstruction parameters are: >> >> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >> >> The dimensions were chosen to include the entire anatomy. 72 projections >> were selected out of 646 projections for a 360 degree scan for both >> calculations. >> >> What parameters and how can I adjust (like alpha, beta, or >> iterations?) to improve the ADMMTV reconstruction? There is not much >> description of this application from the wiki page. >> >> Thanks, >> >> -howard >> >> >> >> _______________________________________________ >> Rtk-users mailing list Rtk-users at public.kitware.com http://public.kitware.com/mailman/listinfo/rtk-users >> >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> >> > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.mory at creatis.insa-lyon.fr Thu Dec 18 05:13:15 2014 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Thu, 18 Dec 2014 11:13:15 +0100 Subject: [Rtk-users] ADMMTVReconstruction In-Reply-To: References: <548EA4E1.4090801@creatis.insa-lyon.fr> <54919EE9.3010406@creatis.insa-lyon.fr> Message-ID: <5492A8BB.2030209@creatis.insa-lyon.fr> Hi Howard, I've taken a look at your data. 
You can apply tv denoising on the out.mha volume and obtain a significantly lower level of noise without blurring structures by using the following command : rtktotalvariationdenoising -i out.mha -g 0.001 -o tvdenoised/gamma0.001.mha -n 100 I was unable to obtain good results with iterative reconstruction from the projection data you sent, though. I think the main reason for this is that your projections have much-higher-than-zero attenuation in air. Your calculation of i0 when converting from intensity to attenuation is probably not good enough. Try to correct for this effect first. Then you can start performing SART and Conjugate Gradient reconstructions on your data, and once you get these right, play with ADMM. You might need to remove the table from the projections to be able to restrict the reconstruction volume strictly to the patient, and speed up the computations. We can provide help for that too. Best regards, Cyril On 12/17/2014 05:02 PM, Howard wrote: > Hi Cyril, > I've sent you two files via wetransfer.com : > one is the sparse projection set with geometry file and the other is > the fdk reconstructed image based on full projection set. Please let > me know if you have trouble receiving them. > Thanks very much for looking into this. > -Howard > > On Wed, Dec 17, 2014 at 10:19 AM, Cyril Mory > > wrote: > > Hi Howard, > > Thanks for the detailed feedback. > The image getting blurry is typically due to a too high gamma. > Depending on you data, gamma can have to be set to a very small > value (I use 0.007 in some reconstructions on clinical data). Can > you send over your volume reconstructed from full projection data, > and I'll have a quick look ? > > There is a lot of instinct in the setting of the parameters. With > time, one gets used to finding a correct set of parameters without > really knowing how. I can also try to reconstruct from your cbct > data if you send me the projections and the geometry. 
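The intensity-to-attenuation conversion Cyril refers to follows the Beer-Lambert law, p = ln(i0 / I), where i0 is the intensity a detector pixel reads in air. A minimal sketch; the helper name is an assumption for illustration, not RTK's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the intensity-to-attenuation conversion (Beer-Lambert law):
// p = ln(i0 / I). With a correct i0, rays through air map to an
// attenuation of zero; a wrong i0 shifts every air pixel away from
// zero, which is the kind of bias described above. The function name is
// hypothetical, for illustration only.
std::vector<double> intensityToAttenuation(const std::vector<double>& intensity,
                                           double i0)
{
  std::vector<double> attenuation;
  attenuation.reserve(intensity.size());
  for (double v : intensity)
    attenuation.push_back(std::log(i0 / v));  // air pixel (v == i0) -> 0
  return attenuation;
}
```

Once air pixels sit at zero attenuation, SART and conjugate-gradient reconstructions have a much better chance of producing a clean volume.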
> > Best regards, > Cyril > > > On 12/17/2014 03:49 PM, Howard wrote: >> Hi Cyril, >> Thanks very much for your detailed and nice description on how to >> use the admmtv reconstruction. I followed your suggestions and >> re-ran reconstructions using admmtotalvariation and admmwavelets >> with cbct projection data from a thoracic patient. >> I am reporting what I found and hope these will give you >> information for further improvement. >> 1. I repeated admmtotalvariation with 30 iterations. No >> improvement was observed. As a matter of fact, the reconstructed >> image is getting a lot noiser compared to that using 3 >> iterations. The contrast is getting worse as well. I tried to >> play around with window & level in case I was fooled but >> apparently more iterations gave worse results. >> 2. Similarly I ran 30 iterations using admmwavelets. Slightly >> better reconstruction compared with total variation. >> 3. Then I went ahead to test if TV benefits us anything using the >> tvdenoising application on the fdk-reconstructed >> image reconstructed from full projection set. I found that the >> more iterations, the more blurry the image became. For example, >> with 50 iterations the contrast on the denoised image is very low >> so that the vertebrae and surrounding soft tissue are hardly >> distinguishable. Changing gamma's at 0.2, 0.5, 1.0, 10 did not >> seem to make a difference on the image. With 5 iterations the >> denoising seems to work fairly well. Again, changing gamma's >> didn't make a difference. >> I hope I didn't misused the totalvariationdenoising application. >> The command I executed was: rtktotalvariationdenoising -i out.mha >> -o out_denoising_n50_gamma05 --gamma 0.5 -n 50 >> In summary, tdmmwavelets seems perform better than >> tdmmtotalvariation but neither gave satisfactory results. No sure >> what we can infer from the TV denoising study. I could send my >> study to you if there is a need. Please let me know what tests I >> could run. 
Further help on improvement is definitely welcome and >> appreciated. >> -Howard >> >> On Mon, Dec 15, 2014 at 4:07 AM, Cyril Mory >> > > wrote: >> >> Hello Howard, >> >> Good to hear that you're using RTK :) >> I'll try to answer all your questions, and give you some advice: >> - In general, you can expect some improvement over rtkfdk, >> but not a huge one >> - You can find the calculations in my PhD thesis >> https://tel.archives-ouvertes.fr/tel-00985728 (in English. >> Only the introduction is in French) >> - Adjusting the parameters is, in itself, a research topic >> (sorry!). Alpha controls the amount of regularization and >> only that (the higher, the more regularization). Beta, >> theoretically, should only change the convergence speed, >> provided you do an infinite number of iterations (I know it >> doesn't help, sorry again!). In practice, beta is ubiquitous >> and appears everywhere in the calculations, therefore it is >> hard to predict what effect an increase/decrease of beta will >> have on the images. I would keep it as is, and play with alpha >> - 3 iterations is way too few. I typically used 30 >> iterations. Using the CUDA forward and back projectors helped >> a lot to keep the computation time manageable >> - The quality of the results depends a lot on the nature of >> the image you are trying to reconstruct. In a nutshell, the >> algorithm assumes that the image you are reconstructing has a >> certain form of regularity, and discards the potential >> solutions that do not have it. This assumption partly >> compensates for the lack of data. ADMM TV assumes that the >> image you are reconstructing is piecewise constant, i.e. has >> large uniform areas separated by sharp borders. If your image >> is a phantom, it should give good results.
If it is a real >> patient, you should probably change to another algorithm that >> assumes another form of regularity in the images (try >> rtkadmmwavelets) >> - You can find out whether your typical images can benefit >> from TV regularization by reconstructing from all projections >> with rtkfdk, then applying rtktotalvariationdenoising on the >> reconstructed volume (try 50 iterations and adjust the gamma >> parameter: high gamma means high regularization). If this >> denoising implies an unacceptable loss of quality, stay away >> from TV for these images, and try wavelets >> >> I hope this helps >> >> Looking forward to reading you again, >> Cyril >> >> >> On 12/12/2014 06:42 PM, Howard wrote: >>> I am testing the ADMM total variation reconstruction with a >>> sparse data sample. I could reconstruct, but the results were >>> not as good as expected. In other words, it didn't show much >>> improvement compared to FDK reconstruction using the same >>> sparse projection data. >>> The parameters I used in ADMMTV were the following: >>> --spacing 2,2,2 --dimension 250,100,250 --alpha 1 --beta >>> 1000 -n 3 >>> while the FDK reconstruction parameters were: >>> --spacing 2,2,2 --dimension 250,100,250 --pad 0.1 --hann 0.5 >>> The dimensions were chosen to include the entire anatomy. 72 >>> projections were selected out of 646 projections for a 360 >>> degree scan for both calculations. >>> Which parameters can I adjust, and how (alpha, beta, number of >>> iterations?), to improve the ADMMTV reconstruction? There is >>> not much description of this application on the wiki page.
>>> Thanks, >>> -howard >>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> http://public.kitware.com/mailman/listinfo/rtk-users >> >> -- >> -- >> Cyril Mory, Post-doc >> CREATIS >> Leon Berard cancer treatment center >> 28 rue Laënnec >> 69373 Lyon cedex 08 FRANCE >> >> Mobile: +33 6 69 46 73 79 >> > > -- > -- > Cyril Mory, Post-doc > CREATIS > Leon Berard cancer treatment center > 28 rue Laënnec > 69373 Lyon cedex 08 FRANCE > > Mobile: +33 6 69 46 73 79 > -- -- Cyril Mory, Post-doc CREATIS Leon Berard cancer treatment center 28 rue Laënnec 69373 Lyon cedex 08 FRANCE Mobile: +33 6 69 46 73 79 -------------- next part -------------- An HTML attachment was scrubbed... URL:

From wuchao04 at gmail.com Wed Dec 24 06:22:37 2014 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 24 Dec 2014 12:22:37 +0100 Subject: [Rtk-users] Tiff lookup table question Message-ID: Hi everyone, Merry Christmas! I have some minor questions about the tiff lookup table for converting tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found the table a little bit strange. Taking 8 bit unsigned integer tiff pixels as an example. 1) The reference value will be log(257), 2) pixel value p=0 is no attenuation, and 3) for 1<=p<=255 the attenuation is reference - log(p+1). Therefore the table looks like:

p    attenuation
0    0, or log(257)-log(257)
1    log(257)-log(2)
2    log(257)-log(3)
3    log(257)-log(4)
...
254  log(257)-log(255)
255  log(257)-log(256)

My questions are: Why is p=0 treated differently? Is this an industrial standard? For pixel values from 1 to 255, why is the attenuation log(257)-log(p+1), not log(256)-log(p)?

Thanks and best regards, Chao

From simon.rit at creatis.insa-lyon.fr Wed Dec 24 08:29:49 2014 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 24 Dec 2014 14:29:49 +0100 Subject: [Rtk-users] Tiff lookup table question In-Reply-To: References: Message-ID: Hi Chao, Good question.
I can't remember exactly, but looking at the test data, the image ExternalData/testing/Data/Input/Digisens/ima0010.tif has 0 values at the top border, which is probably why I did this, since the border is next to air. Don't hesitate to build your own TIFF LUT if you'd prefer maximum attenuation for 0 values. If you want it in RTK, maybe we can check for a specific tag in the TIFF file and do a specific treatment for your scanner. Good luck! Simon On Wed, Dec 24, 2014 at 12:22 PM, Chao Wu wrote: > Hi everyone, Merry Christmas! > > I have some minor questions about the tiff lookup table for converting > tiff values to attenuation in rtkTiffLookupTableImageFilter.h. I found > the table a little bit strange. Taking 8 bit unsigned integer tiff > pixels as an example. > 1) The reference value will be log(257), > 2) pixel value p=0 is no attenuation, and > 3) for 1<=p<=255 the attenuation is reference - log(p+1). > > Therefore the table looks like: > p attenuation > 0 0, or log(257)-log(257) > 1 log(257)-log(2) > 2 log(257)-log(3) > 3 log(257)-log(4) > ... > 254 log(257)-log(255) > 255 log(257)-log(256) > > My questions are: > Why is p=0 treated differently? Is this an industrial standard? > For pixel values from 1 to 255, why is the attenuation > log(257)-log(p+1), not log(256)-log(p)? > > Thanks and best regards, > Chao > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > http://public.kitware.com/mailman/listinfo/rtk-users
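[Editor's note: the 8-bit lookup table described in this thread can be written out directly. The sketch below follows the formula given in the emails; the function name is illustrative and is not RTK's API.]

```python
import math

def tiff_lut_8bit():
    """Build the attenuation LUT for 8-bit TIFF pixels described above.

    reference = log(257); p = 0 is special-cased to zero attenuation
    (treated as air), and 1 <= p <= 255 maps to log(257) - log(p + 1).
    """
    ref = math.log(257)
    lut = [0.0]  # p = 0: no attenuation by convention
    lut += [ref - math.log(p + 1) for p in range(1, 256)]
    return lut

lut = tiff_lut_8bit()

# The alternative Chao suggests, log(256) - log(p), is undefined at p = 0
# (log(0) diverges), which may be one reason p = 0 needs a special case
# in the first place.
alt = [math.log(256) - math.log(p) for p in range(1, 256)]
```

Note that with the log(257)-log(p+1) form, the brightest pixel (p = 255) still carries a small residual attenuation of log(257) - log(256), rather than exactly zero.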