From vl at xris.eu Fri Feb 9 09:07:56 2018 From: vl at xris.eu (Vincent Libertiaux) Date: Fri, 9 Feb 2018 15:07:56 +0100 Subject: [Rtk-users] RTK on multiple GPU's Message-ID: <436b3f75-0eab-67b1-b3b0-e3d25cc4cd3d@xris.eu> Hello Simon, hello everyone. I have a computer that I would like to use as a CT reconstruction server for several users, let's say 3. I was wondering if it is possible to run 3 simultaneous reconstructions with RTK, one on each GPU? I thank you very much in advance for any answer or clue. Best regards, Vincent From riblettmj at mymail.vcu.edu Fri Feb 9 14:06:45 2018 From: riblettmj at mymail.vcu.edu (Matthew Joseph Riblett) Date: Fri, 9 Feb 2018 14:06:45 -0500 Subject: [Rtk-users] RTK on multiple GPU's In-Reply-To: <436b3f75-0eab-67b1-b3b0-e3d25cc4cd3d@xris.eu> References: <436b3f75-0eab-67b1-b3b0-e3d25cc4cd3d@xris.eu> Message-ID: Vincent, This should be possible. You may have to tell the application which GPU you would like it to use by setting an environment variable. I run on a machine with four GPUs and can individually select which one I want to use by calling "CUDA_VISIBLE_DEVICES=0 rtkfdk ..." or "CUDA_VISIBLE_DEVICES=1 rtkfdk ...", which exposes the target GPU to the running application. This is a very manual, command-line approach; however, the CUDA_VISIBLE_DEVICES variable can also be set to a unique value for each user in their respective user profile (e.g. setting "CUDA_VISIBLE_DEVICES=1" for USER1 in ~/.bashrc on a Linux machine). This second approach gives each user a dedicated GPU at their disposal whenever they need it. Hope this helps. - Matt __ Matthew J. Riblett Virginia Commonwealth University Department of Radiation Oncology Medical Physics Graduate Program Office: Sanger Hall, Room B1-013 401 College Street | P.O. Box 980058 Richmond, Virginia 23298 > On Feb 9, 2018, at 9:07 AM, Vincent Libertiaux wrote: > > Hello Simon, > > hello everyone.
> > I have a computer that I would like to use as a CT reconstruction server for several users, let's say 3. I was wondering if it is possible to run 3 simultaneous reconstructions with RTK, one on each GPU? > > I thank you very much in advance for any answer or clue. > > Best regards, > > Vincent > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielx.lin at gmail.com Sun Feb 11 02:18:37 2018 From: danielx.lin at gmail.com (Daniel Xun Lin) Date: Sun, 11 Feb 2018 02:18:37 -0500 Subject: [Rtk-users] CUDA Forward Projection Intensity Variance with Geometry Shift Message-ID: Hi, I am using the CUDAForwardProjection functionality of RTK through the SimpleRTK wrapper. I am trying to generate DRRs that are realistic and simulate fluoroscopic images of the structure. So far I have set up the image spacing and pixel size based on the dimensions of the C-arm that was used to acquire the CT and the fluoroscopic images, and I have gotten good results when the detector is simulated to have all angles at 0. However, when I attempt to simulate movement of the model (by changing the X/Y coordinate of the isocenter on the projection image, or by modifying the X/Y source offset), the overall intensity of my projection image sees a drastic drop. It would be greatly appreciated if anyone could advise on whether this may be an issue of me incorrectly setting my ConstantImageSource, or if I am simulating the offset of my structures incorrectly. I apologize if I am vague on any points or if anything is unclear; please let me know. Thank you, Daniel Xun Lin -------------- next part -------------- An HTML attachment was scrubbed...
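The per-user CUDA_VISIBLE_DEVICES setup described earlier in the thread can also be scripted when dispatching reconstructions from a central server. A minimal sketch, assuming jobs are launched as subprocesses; the helper name and the job loop are illustrative, not part of RTK:

```python
import os

def gpu_env(gpu_index):
    """Copy the current environment, restricted to a single GPU.

    CUDA enumerates only the listed device, so the launched process
    sees the selected GPU as device 0.
    """
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return env

# e.g. one reconstruction per GPU (rtkfdk arguments elided):
# import subprocess
# for gpu, args in enumerate(jobs):
#     subprocess.Popen(["rtkfdk", *args], env=gpu_env(gpu))
```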
URL: From simon.rit at creatis.insa-lyon.fr Mon Feb 12 01:55:50 2018 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Mon, 12 Feb 2018 07:55:50 +0100 Subject: [Rtk-users] CUDA Forward Projection Intensity Variance with Geometry Shift In-Reply-To: References: Message-ID: Hi Daniel, I don't see how moving the source and the detector could trigger a drastic drop in intensities. If you can send two geometry files (before and after the drop), we can try to have a look. Best regards, Simon On Sun, Feb 11, 2018 at 8:18 AM, Daniel Xun Lin wrote: > Hi, > > I am using the CUDAForwardProjection functionality of RTK through the > SimpleRTK wrapper. I am trying to generate DRRs that are realistic and > simulates fluoroscopic images of the structure and thus far I've setup the > image spacing and pixel based on the dimensions of the C-arm that was used > to acquire the CT and the fluoroscopic images and have gotten good results > when the detector is simulated to have all angles at 0. > > However, when I attempt to simulate movement of the model (by changing the > X/Y coordinate on the projection image of isocenter or by modifying the X/Y > source offset), the overall intensity of my projection image sees a drastic > drop. > > It would be greatly appreciated if anyone could advise on whether this may > be an issue of me incorrectly setting my ConstantImageSource or if I am > simulating the offset of my structures incorrectly. > > I apologize if I am vague on any points or if anything is unclear, please > let me know. > > Thank you, > Daniel Xun Lin > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vl at xris.eu Mon Feb 12 05:37:16 2018 From: vl at xris.eu (Vincent Libertiaux) Date: Mon, 12 Feb 2018 11:37:16 +0100 Subject: [Rtk-users] RTK on multiple GPU's In-Reply-To: References: <436b3f75-0eab-67b1-b3b0-e3d25cc4cd3d@xris.eu> Message-ID: Hi Matt, Thank you very much for your input. I will try your approach as soon as I receive the graphics card I have ordered. Best regards, Vincent From anais.capouillez at student.uliege.be Wed Feb 14 07:47:54 2018 From: anais.capouillez at student.uliege.be (anais.capouillez at student.uliege.be) Date: Wed, 14 Feb 2018 13:47:54 +0100 (CET) Subject: [Rtk-users] Reducing the pattern of errors Message-ID: <1632925457.12060832.1518612474843.JavaMail.zimbra@student.uliege.be> Hi, When I use FDK to reconstruct my volume, there are some parts inside the volume where the error is up to 10%, even in the middle of my volume. When I compute the difference between the phantom and the reconstructed object, I can see some patterns of small lines where the error is much bigger than in the rest of the volume. I need to reconstruct my volume with at least one large area where the error is no bigger than a few percent for each voxel. The patterns of errors occur too often to select a sufficiently large area. Unfortunately, I cannot change the number of projections I use because this is imposed on me. Therefore, I want to know if it is possible to obtain better results with FDK or if I have to use an iterative algorithm. For the parameters of the geometry, I used 180 projections, sdd=978.5, and sid=478.5. For the projections I used a spacing of 0.8 and a dimension of 1024. And for the reconstruction, I used a spacing of 0.5 and a dimension of 204*404*204. I attached two screenshots of the absolute difference between the phantom and the reconstruction (one zoomed on some of the small lines of error, and another one not zoomed).
If you want the images of my phantom, the reconstruction, and the absolute difference between the two (with the actual values and with relative values), I uploaded them here: https://drive.google.com/drive/folders/194k2CDomeLlmVxybTllSKWYhpCZyPvVx?usp=sharing Thank you. Anaïs -------------- next part -------------- A non-text attachment was scrubbed... Name: Difference.png Type: image/png Size: 225282 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ZoomedDifference.png Type: image/png Size: 96541 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Wed Feb 14 10:30:49 2018 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 14 Feb 2018 16:30:49 +0100 Subject: [Rtk-users] Reducing the pattern of errors In-Reply-To: <1632925457.12060832.1518612474843.JavaMail.zimbra@student.uliege.be> References: <1632925457.12060832.1518612474843.JavaMail.zimbra@student.uliege.be> Message-ID: Hi, These are numerical errors. How did you create the projections? It seems that you have used the voxelized phantom to simulate the projections. This strongly enhances these artefacts. This is why most people compute simulated projections analytically with simple analytical shapes (such as the Shepp-Logan phantom). If you share your projection meta-information, I can illustrate how to do the simulation differently. If you already used analytical simulations, a way to improve the quality is to increase the sampling (use more and finer pixels in the projections). If you want to reduce the artefacts from these projections without modifying the projection sampling, you need to remove some high frequencies with a proper windowing. You can try --hann 1 for example on the rtkfdk command line. I hope this helps, Simon On Wed, Feb 14, 2018 at 1:47 PM, wrote: > Hi, > > When I use FDK to reconstruct my volume, there are some parts inside the > volume where the error is up to 10%, even in the middle of my volume.
When > I compute the difference between the phantom and the reconstructed object I > can see some patterns of small lines where the error is way bigger than in > the rest of the volume. > I need to reconstruct my volume with at least one big area with no error > bigger than few percents of error for each voxel of the area. The patterns > of errors occur too often to select an area sufficiently big. > Unfortunately, I cannot change the number of projections I use because > this is imposed to me. > > Therefore, I want to know if it is possible to obtain better results with > FDK or if I have to use an iterative algorithm. > > > > For the parameters of the geometry, I used 180 projections, sdd=978.5, and > sid=478.5. For the projections I used a spacing of 0.8 and a dimension of > 1024. And for the reconstruction, I used a spacing of 0.5, and a dimension > of 204*404*204. > > I joined two screenshots of the absolute difference between the phantom > and the reconstruction (one zoomed on some of the small lines of error, and > another one not zoomed). > > > If you want the images of my phantom, the reconstruction, and the absolute > difference between the two (with the actual values and with relative > values), I uploaded them here: https://drive.google.com/drive/folders/ > 194k2CDomeLlmVxybTllSKWYhpCZyPvVx?usp=sharing > > > Thank you. > > Ana?s > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... 
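Simon's --hann 1 suggestion amounts to apodizing the ramp filter so that its response falls to zero at the cutoff, suppressing the high frequencies that carry the simulation artefacts. A rough numpy sketch of the idea, not RTK's actual implementation (the function name and the cutoff convention are illustrative):

```python
import numpy as np

def apodized_ramp(n, hann_cut=1.0):
    """Frequency response of a ramp filter |f| multiplied by a Hann
    window. hann_cut is the cutoff as a fraction of the Nyquist
    frequency, so hann_cut=1.0 mirrors the thread's `--hann 1`."""
    f = np.fft.fftfreq(n)                 # frequencies in cycles/sample
    cutoff = 0.5 * hann_cut               # Nyquist is 0.5 cycles/sample
    window = np.where(np.abs(f) <= cutoff,
                      0.5 * (1.0 + np.cos(np.pi * f / cutoff)),
                      0.0)
    return np.abs(f) * window

H = apodized_ramp(256)  # zero at DC, fully suppressed at Nyquist
```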
URL: From anais.capouillez at student.uliege.be Wed Feb 14 11:04:23 2018 From: anais.capouillez at student.uliege.be (anais.capouillez at student.uliege.be) Date: Wed, 14 Feb 2018 17:04:23 +0100 (CET) Subject: [Rtk-users] Reducing the pattern of errors In-Reply-To: References: <1632925457.12060832.1518612474843.JavaMail.zimbra@student.uliege.be> Message-ID: <947080723.12215685.1518624263809.JavaMail.zimbra@student.uliege.be> Thank you very much. This is probably due to the resolution I use, then. I cannot use a better resolution for now because there is not enough memory when I use CUDA (but I will work with a better graphics card soon). Best regards. Anaïs ----- Original message ----- From: "Simon Rit" To: "anais capouillez" Cc: rtk-users at public.kitware.com Sent: Wednesday, 14 February 2018 16:30:49 Subject: Re: [Rtk-users] Reducing the pattern of errors Hi, These are numerical errors. How did you create the projections? It seems that you have used the voxelized phantom to simulate the projections. This strongly enhances these artefacts. This is why most people compute simulated projections analytically with simple analytical shapes (such as the Shepp-Logan phantom). If you share your projection meta-information, I can illustrate how to do the simulation differently. If you already used analytical simulations, a way to improve the quality is to increase the sampling (use more and finer pixels in the projections). If you want to reduce the artefacts from these projections without modifying the projection sampling, you need to remove some high frequencies with a proper windowing. You can try --hann 1 for example on the rtkfdk command line. I hope this helps, Simon On Wed, Feb 14, 2018 at 1:47 PM, wrote: > Hi, > > When I use FDK to reconstruct my volume, there are some parts inside the > volume where the error is up to 10%.
When > I compute the difference between the phantom and the reconstructed object I > can see some patterns of small lines where the error is way bigger than in > the rest of the volume. > I need to reconstruct my volume with at least one big area with no error > bigger than few percents of error for each voxel of the area. The patterns > of errors occur too often to select an area sufficiently big. > Unfortunately, I cannot change the number of projections I use because > this is imposed to me. > > Therefore, I want to know if it is possible to obtain better results with > FDK or if I have to use an iterative algorithm. > > > > For the parameters of the geometry, I used 180 projections, sdd=978.5, and > sid=478.5. For the projections I used a spacing of 0.8 and a dimension of > 1024. And for the reconstruction, I used a spacing of 0.5, and a dimension > of 204*404*204. > > I joined two screenshots of the absolute difference between the phantom > and the reconstruction (one zoomed on some of the small lines of error, and > another one not zoomed). > > > If you want the images of my phantom, the reconstruction, and the absolute > difference between the two (with the actual values and with relative > values), I uploaded them here: https://drive.google.com/drive/folders/ > 194k2CDomeLlmVxybTllSKWYhpCZyPvVx?usp=sharing > > > Thank you. > > Ana?s > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > From muhammad.s.waqar at gmail.com Sun Feb 18 19:01:25 2018 From: muhammad.s.waqar at gmail.com (muhammad waqar) Date: Sun, 18 Feb 2018 19:01:25 -0500 Subject: [Rtk-users] Hounsfield Unit Conversion - New User Message-ID: Hello all, I'm a new user to ITK/RTK and I'm having a few issues: Following the ElektaReconstruction steps on the wiki, I was able to reconstruct my scan. However, the window and level are still in attenuation coefficients. 
How can I convert my scan to Hounsfield Units? Whenever I put in any value for --wpc in rtkfdk (I've tried values from 0.015-150), my recon results in a 'blank' as attached below. Any help in resolving this would be great. Kindly, -Waqar Muhammad Carleton University -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-18 at 6.55.27 PM.png Type: image/png Size: 225874 bytes Desc: not available URL: From simon.rit at creatis.insa-lyon.fr Mon Feb 19 01:39:40 2018 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Mon, 19 Feb 2018 07:39:40 +0100 Subject: [Rtk-users] Hounsfield Unit Conversion - New User In-Reply-To: References: Message-ID: Hi, To convert to HU, you need to measure a phantom with water and air and to apply the formula for HU conversion (see Wikipedia). Note that, due to scatter in (Elekta) cone-beam CTs, this conversion will be difficult because of strong artifacts. The --wpc coefficients can allow you to apply the slope of this formula, but the intercept must be subtracted after reconstruction. We illustrate how to use wpc here. Good luck, Simon On Mon, Feb 19, 2018 at 1:01 AM, muhammad waqar wrote: > Hello all, > > I'm a new user to ITK/RTK and I'm having a few issues: > > Following the ElektaReconstruction steps on the wiki, I was able to > reconstruct my scan. However, the window and level are still in attenuation > coefficients. How can I convert my scan to Hounsfield Units? > > Whenever I put in any value for --wpc in RTKFDK (I've tried values from > 0.015-150) my recon results in a 'blank' as is attached below. > > Any help in resolving this would be great. > > Kindly, > -Waqar Muhammad > Carleton University > > >
> > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-18 at 6.55.27 PM.png Type: image/png Size: 225874 bytes Desc: not available URL: From wuchao04 at gmail.com Mon Feb 19 06:00:13 2018 From: wuchao04 at gmail.com (Chao Wu) Date: Mon, 19 Feb 2018 12:00:13 +0100 Subject: [Rtk-users] Hounsfield Unit Conversion - New User In-Reply-To: References: Message-ID: And be aware that --wpc receives polynomial coefficients, and if you only specify one number this will become the constant term only, which is probably not your intent. Regards, Chao 2018-02-19 7:39 GMT+01:00 Simon Rit : > Hi, > To convert to HU, you need to measure a phantom with water and air and to > apply the formula for HU conversion (see wikipedia > ). Note that due to > scatter in (Elekta) cone-beam CTs, you will have difficulties in doing this > conversion due to strong artifacts. > The --wpc coefficients can allow you to apply the slope of this formula > but the intercept must be subtracted after reconstruction. We illustrate > here how to use > wpc. > Good luck, > Simon > > On Mon, Feb 19, 2018 at 1:01 AM, muhammad waqar < > muhammad.s.waqar at gmail.com> wrote: > >> Hello all, >> >> I'm a new user to ITK/RTK and I'm having a few issues: >> >> Following the ElektaReconstruction steps on the wiki, I was able to >> reconstruct my scan. However, the window and level are still in attenuation >> coefficients. How can I convert my scan to Hounsfield Units? >> >> Whenever I put in any value for --wpc in RTKFDK (I've tried values from >> 0.015-150) my recon results in a 'blank' as is attached below. >> >> Any help in resolving this would be great. >> >> Kindly, >> -Waqar Muhammad >> Carleton University >> >> >> ? 
>> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> https://public.kitware.com/mailman/listinfo/rtk-users >> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-18 at 6.55.27 PM.png Type: image/png Size: 225874 bytes Desc: not available URL: From lotte.schyns at maastro.nl Tue Feb 20 09:14:06 2018 From: lotte.schyns at maastro.nl (Lotte Schyns) Date: Tue, 20 Feb 2018 14:14:06 +0000 Subject: [Rtk-users] RTK Scatter Correction Message-ID: <12be0887-19fc-388d-6f75-80b79c72e19d@maastro.nl> Hello, I'm investigating the possibility of performing scatter corrections in RTK. From what I could find, there seem to be two options (correct me if I'm wrong): 1) rtk::BoellaardScatterCorrectionImageFilter (Boellaard paper) 2) rtk::ScatterGlareCorrectionImageFilter (Poludniowski paper) Since both methods are based on a deconvolution approach using the edge-spread function, I was wondering what the difference in implementation is and in which cases one method would be preferred over the other. Lotte From simon.rit at creatis.insa-lyon.fr Wed Feb 21 02:50:24 2018 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 21 Feb 2018 08:50:24 +0100 Subject: [Rtk-users] RTK Scatter Correction In-Reply-To: <12be0887-19fc-388d-6f75-80b79c72e19d@maastro.nl> References: <12be0887-19fc-388d-6f75-80b79c72e19d@maastro.nl> Message-ID: Dear Lotte, These two options address different issues. The first one deals with the patient scatter, and the second one with the detector glare, which might come from the detector scatter (but perhaps not only that).
See the work of, e.g., Poludniowski to understand the difference: dx.doi.org/10.1088/0031-9155/54/12/016 dx.doi.org/10.1088/0031-9155/56/6/019 You'll see that the second paper suggests that both must be corrected. I haven't used those implementations a lot, but I think the detector glare filter closely follows Poludniowski's paper, whereas the Boellaard filter does not closely follow its paper. The Boellaard implementation is the simplest option when only a constant is subtracted, and I actually suspect that it is not working properly (but this would need to be checked; it has been on my todo list for a long while). There are other options:
- auto-detection of I0 in air does something similar to Boellaard's, but differently: http://www.openrtk.org/Doxygen/classrtk_1_1I0EstimationProjectionFilter.html
- empirical cupping correction: https://doi.org/10.1118/1.2188076
- Monte Carlo correction if you have a CT image of your object. The tools to do this are open source, see https://doi.org/10.1016/j.phro.2017.09.002
I hope this helps, Simon On Tue, Feb 20, 2018 at 3:14 PM, Lotte Schyns wrote: > Hello, > > I'm investigating the possibility to perform scatter corrections in RTK. > From what I could find, there seem to be two options (correct me if I'm > wrong): > > 1) rtk::BoellaardScatterCorrectionImageFilter (Boellaard paper) > 2) rtk::ScatterGlareCorrectionImageFilter (Poludniowski paper) > > Since both methods are based on a deconvolution approach using the > edge-spread function, I was wondering what the difference in > implementation is and in which cases one method would be preferred over > the other. > > Lotte > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > -------------- next part -------------- An HTML attachment was scrubbed...
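For intuition on the deconvolution approach behind both scatter filters, here is a minimal 1-D Fourier-division sketch. RTK works in 2-D with parametrized kernels; the signal, kernel shape, and grid below are all made up for illustration:

```python
import numpy as np

def deconvolve_glare(measured, kernel):
    """Undo a (periodic) convolution by dividing in the Fourier domain.

    `kernel` is the normalized glare spread function sampled on the same
    grid as `measured`; its spectrum must not vanish anywhere.
    """
    return np.real(np.fft.ifft(np.fft.fft(measured) / np.fft.fft(kernel)))

n = 64
primary = np.zeros(n)
primary[30:34] = 1.0                                # sharp primary signal
dist = np.minimum(np.arange(n), n - np.arange(n))   # periodic distance
kernel = np.exp(-dist / 3.0)
kernel /= kernel.sum()                              # normalized glare PSF
measured = np.real(np.fft.ifft(np.fft.fft(primary) * np.fft.fft(kernel)))
restored = deconvolve_glare(measured, kernel)       # ~= primary
```

In the noiseless periodic case the division recovers the primary signal exactly; real detector data needs the regularized kernels the papers describe.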
URL: From simon.rit at creatis.insa-lyon.fr Wed Feb 21 04:28:59 2018 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Wed, 21 Feb 2018 10:28:59 +0100 Subject: [Rtk-users] Release of RTK v1.4.0 Message-ID: Dear RTK users, RTK v1.4.0 has just been released, about 17 months after RTK v1.3.0. This should be the last release before a more significant RTK re-factoring to become an ITK external module (see the RTK-ExternalModule GitHub branch). Release notes:
* Many SimpleRTK improvements, including a large update based on SimpleITK 0.10.0
* Continuous integration: improved TravisCI, added CircleCI and AppVeyorCI
* Added ITK-style git hooks
* Improvement of geometric phantoms, added Forbild capability
* Geometry for Bioscan NanoSPECT/CT
* Better description of geometry, with doxygen drawings
* Parallel geometry for rtk::JosephForwardProjectionImageFilter
* Support for cylindrical detectors: - limited to iterative reconstruction - not using displaced detectors - ray-based projectors can handle any radius - voxel-based back projectors (CPU and CUDA) can only handle source-centered cylindrical detectors
* Several methods for spectral CT reconstruction: - forward model - material decomposition of projections - aberrant pixel removal - regularized multi-channel reconstruction (nuclear TV regularization)
* Geometry options for jaw collimation
* Ray-based iterators to seamlessly handle all rays of a stack of projections
* Updates on Varian Xim geometry reading and projection processing
* Updates on medPhoton Ora reading and projection processing
* Varian HNC reader
* Method to generate an RTK geometry from calibration matrices
Many thanks to all contributors, in alphabetical order for this release: Ali Uneri, Andreas Gravgaard Andersen, Bernhard Froehler, Brent van der Heyden, Cyril Mory, David Kögler, Fabien Momey, Hans Johnson, Jerome Lesaint, Julien Jomier, Kiran Joshi, Lotte Schyns, Lucas Gandel, Sébastien Brousmiche, Simon Rit, Thibault Notargiacomo and Thomas Baudier.
As usual, be aware that we don't focus on releases since we have a public GitHub repository that we try to keep stable. I still recommend the use of the master HEAD over releases to enjoy the new RTK developments before their release. We still have a few on-going projects for which we will use and enhance RTK. Simon (for the RTK consortium) -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Feb 21 06:57:58 2018 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 21 Feb 2018 12:57:58 +0100 Subject: [Rtk-users] Fwd: High pixel/voxel values in SART In-Reply-To: References: Message-ID: L.S., I was working on FDK in the past, and iterative reconstruction methods are still new to me. I understand the concept of iterative methods but am not aware of the technical details of the implementation. Recently I tried SART but got streak artefacts in reconstructed slices, as well as dots with very high values (both negative and positive) at the corners of slices. When I checked intermediate images in the pipeline, I found that those are introduced in itk::DivideOrZeroOutImageFilter. You can see from the attached picture: the left half shows the output of rtk::RayBoxIntersectionImageFilter and the right half the output of itk::DivideOrZeroOutImageFilter, both during processing of the first projection in the first iteration. Although it contains the whole object, my volume is relatively small compared to the size of the detector images. The rays intersecting the volume near corners and edges therefore result in small values in the output of the ray-box filter, and these greatly magnify the pixel values after division. This may not be a problem if the detector images are noiseless, but in practice this will magnify the noise, and it will stay as streaks and dots in slices.
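The magnification described here follows directly from the filter's semantics. A scalar mimic of itk::DivideOrZeroOutImageFilter (threshold and fallback constant assumed from the ITK defaults):

```python
def divide_or_zero_out(num, den, threshold=1e-5, constant=0.0):
    """Return num/den where |den| exceeds the threshold, else a constant,
    mimicking itk::DivideOrZeroOutImageFilter on a single pixel."""
    return num / den if abs(den) > threshold else constant

# A corner-grazing ray has a tiny ray-box intersection length, so a small
# amount of projection noise is blown up into a huge correction value:
noise = 0.01
corner_ray = divide_or_zero_out(noise, 0.001)              # ~10: 1000x noise
central_ray = divide_or_zero_out(noise, 30.0)              # ~3e-4: harmless
clamped = divide_or_zero_out(noise, 0.001, threshold=7.0)  # 0.0: zeroed out
```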
To correct for this I have something in mind, such as making the volume bigger and cropping the detector images so that corners and edges of the volume do not project onto the cropped detector; or increasing the threshold in the divide filter so that low values from edge/corner rays will be zeroed out. Since I lack experience with iterative methods, my question is: what is the best or common practice to handle this? Thanks a lot. Regards, Chao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: raybox÷1.png Type: image/png Size: 292092 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Feb 21 07:38:26 2018 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 21 Feb 2018 13:38:26 +0100 Subject: [Rtk-users] Fwd: High pixel/voxel values in SART In-Reply-To: References: Message-ID: <0543e3d4-8a77-bb75-80a5-f5a9eb530a42@creatis.insa-lyon.fr> Hi Chao, Indeed, you identified the problem quite well. That division is required by the maths of SART, but it brings its own set of problems. To make a long story short, I don't know of any best practice for solving this problem. My suggestions:
- increasing the threshold to the size of a few voxels could do the trick. We've never tried it, and I'm curious about the result
- increasing the size of your volume, if you can, and cropping it in the end, is also a good idea and could work, but it would increase the memory and time requirements, so I'd try it only if the rest fails
- the theoretical origin of these artifacts is that in SART, projections are back-projected one by one instead of all together, so when its turn comes, each projection can have a strong influence on the volume. Try the --nprojspersubset argument.
I've explained its role in details in an earlier email, https://public.kitware.com/pipermail/rtk-users/2017-July/010470.html, but the email doesn't display correctly, so I'm copy-pasting it below between <<<<<< >>>>>>>. - use conjugate gradient instead, removing the lambda and increasing the number of iterations (at least 30). CG requires more iterations, but each iteration is shorter, and it can run fully on GPU (switch --cudacg on if your GPU has enough memory, off otherwise). Please keep us posted with the results of your experiments, Cyril <<<<<< Hi Lotte, I'm on vacation, with very limited access to the Internet, so I can't look at your SIRT result, but I can answer your question on SART, SIRT and CG : all of those (as well as ART, and another method called OS-SART) minimize the same cost function, which only consists of a least-squares data-attachment term, i.e. || R f - p ||^2, with f the sought volume, p the projections and R the forward projection, but with different algorithms : - SIRT does a simple gradient descent. Since the gradient of the cost function is 2 R* ( R f - p ), with R* the transpose of R, i.e. the back projection, this means that at each iteration, the algorithm needs one forward and one back projection from ALL angles, and one "update" of the volume - ART, SART and OS-SART all use the same strategy: they split the cost function into smaller bits (individual rays for ART, individual projections for SART, sets of several projections for OS-SART, so ART splits the most, and SART the least), and alternately minimize the cost for each bit. We count one iteration when each of the smaller bits has triggered an "update" of the volume. This means that, per iteration, the smaller you split, the more updates of the volume the algorithm performs, so the faster (in terms of number of iterations) you get to convergence. 
Obviously it does have a dangerous drawback: if data is inconsistent (noise, scatter, truncation, ...), such strategies may not converge - Conjugate gradient minimizes the same cost function, without splitting it (so like SIRT), but using the conjugate gradient algorithm, which converges faster than a simple gradient descent, for two reasons : first, the step size is calculated analytically at each iteration and is optimal, and second, the descent direction is a combination of the gradient at the current iteration and the descent direction at the previous iteration (a "conjugate" direction, thus the algorithm's name) Hope it helps, Cyril >>>>>> On 21/02/2018 12:57, Chao Wu wrote: > L.S., > > I was working on FDK in the past and interative reconstruction methods > are still new to me. > I understand the concept of iteratvie methods but are not aware of > technical details in implementation. > > Recently I am trying SART but got streak artefacts in reconstructed > slices, as well as dots with very high value (both negative and > positive) at corners of slices. > When I checked intermediate images in the pipleline I found that those > are introduced in itk::DivideOrZeroOutImageFilter. > You can see from the attached picture: the left half shows the output > of?rtk::RayBoxIntersectionImageFilter and the right half the output of > itk::DivideOrZeroOutImageFilter, both during processing of the first > projection in the first iteration. > Apparently, although it contains the whole object, my volume is > relatively small compared to the size of the detector images. > Then the rays intersecting the volume near corners and edges result in > small values in the output of the raybox filter, and subsequently > magnify the pixel values largely after division. > This may not be a problem if the detector images are noiseless, but in > practice this will magnify the noise and they will stay as streaks and > dots in slices. 
> > To correct for this I have something in mind, such as making the > volume bigger and cropping the detector images so that corners and > edges of the volume do not project onto the cropped detector; or > increasing the threshold in the divide filter so that low values from > edge/corner rays will be zeroed out. Since I lack experience with > iterative methods, my question is: what is the best or common practice > to handle this? Thanks a lot. > > Regards, > Chao > > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Wed Feb 21 10:52:50 2018 From: wuchao04 at gmail.com (Chao Wu) Date: Wed, 21 Feb 2018 16:52:50 +0100 Subject: [Rtk-users] Fwd: High pixel/voxel values in SART In-Reply-To: <0543e3d4-8a77-bb75-80a5-f5a9eb530a42@creatis.insa-lyon.fr> References: <0543e3d4-8a77-bb75-80a5-f5a9eb530a42@creatis.insa-lyon.fr> Message-ID: Hi Cyril, Thanks for your suggestion. I have tried increasing the threshold. My reconstructed slices are 32x32 mm, so any ray travelling through the volume for less than 13 mm won't cross the 32 mm diameter cylindrical object region (except at the two ends, which are not of interest). To leave some margin, I set a threshold of 7 mm. See the attached picture for the results of one SART iteration. The left one is with the default threshold: you can see dark and bright dots at the corners and some streaks coming from the top-left corner. The right one is with the 7 mm threshold, and the slice is clean except for a trace of a circle outside, which is easy to remove afterwards. So this works.
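The 13 mm figure can be checked with a little geometry, assuming the 32 mm diameter cylinder is inscribed in the 32x32 mm slice: the longest ray segment that stays inside the square but misses the cylinder is the chord across a corner, tangent to the circle and perpendicular to the diagonal, of length side*(sqrt(2)-1).

```python
import math

def max_corner_chord(side):
    """Longest segment inside a side x side square that misses the
    inscribed circle of diameter `side`. The worst case is the tangent
    chord across a corner: 2 * (side/2 * sqrt(2) - side/2)."""
    return side * (math.sqrt(2.0) - 1.0)

limit = max_corner_chord(32.0)  # ~13.25 mm, so a 7 mm threshold has margin
```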
I don't think that increasing the volume and cropping it in the end will simply work unless the enlarged volume's projection is bigger than the detector image, because the problematic values are not only at edge and corner voxels but are also spread through the volume as streaks by the forward projector, as shown in the left picture. I believe that OS-SART and SIRT can mitigate this problem too, since they are less sensitive to noise, although they are slower. I will move to CG once I have a good SART implementation for the big datasets in my group. There are still a lot of challenges for me. Unlike FDK, where you can reconstruct a small subvolume directly, with iterative methods (I believe) I always have to reconstruct full slices, which results in memory issues, especially with CUDA. I need to stream the reconstruction pipeline somehow... Best regards, Chao 2018-02-21 13:38 GMT+01:00 Cyril Mory : > Hi Chao, > > Indeed, you identified the problem quite well. That division is required > by the maths of SART, but it brings its own set of problems. To make a long > story short, I don't know of any best practice for solving this > problem. My suggestions: > > - increasing the threshold to the size of a few voxels could do the trick. > We've never tried it, and I'm curious about the result > > - increasing the size of your volume, if you can, and cropping it in the > end, is also a good idea, and could work, but it would increase the memory > and time requirements, so I'd try it only if the rest fails > > - the theoretical origin of these artifacts is that in SART, projections > are back-projected one by one instead of all together, so when its turn > comes, each projection can have a strong influence on the volume. Try the > --nprojspersubset argument. I've explained its role in detail in an > earlier email, https://public.kitware.com/pipermail/rtk-users/2017-July/010470.html, but the email doesn't display correctly, so I'm copy-pasting > it below between <<<<<< >>>>>>>. 
> > - use conjugate gradient instead, removing the lambda and increasing the > number of iterations (at least 30). CG requires more iterations, but each > iteration is shorter, and it can run fully on GPU (switch --cudacg on if > your GPU has enough memory, off otherwise). > > Please keep us posted with the results of your experiments, > > Cyril > > <<<<<< > > Hi Lotte, > > I'm on vacation, with very limited access to the Internet, so I can't look > at your SIRT result, but I can answer your question on SART, SIRT and CG : > all of those (as well as ART, and another method called OS-SART) minimize > the same cost function, which only consists of a least-squares > data-attachment term, i.e. || R f - p ||^2, with f the sought volume, p the > projections and R the forward projection, but with different algorithms : > - SIRT does a simple gradient descent. Since the gradient of the cost > function is 2 R* ( R f - p ), with R* the transpose of R, i.e. the back > projection, this means that at each iteration, the algorithm needs one > forward and one back projection from ALL angles, and one "update" of the > volume > - ART, SART and OS-SART all use the same strategy: they split the cost > function into smaller bits (individual rays for ART, individual projections > for SART, sets of several projections for OS-SART, so ART splits the most, > and SART the least), and alternately minimize the cost for each bit. We > count one iteration when each of the smaller bits has triggered an "update" > of the volume. This means that, per iteration, the smaller you split, the > more updates of the volume the algorithm performs, so the faster (in terms > of number of iterations) you get to convergence. 
Obviously it does have a > dangerous drawback: if data is inconsistent (noise, scatter, truncation, > ...), such strategies may not converge > - Conjugate gradient minimizes the same cost function, without splitting > it (so like SIRT), but using the conjugate gradient algorithm, which > converges faster than a simple gradient descent, for two reasons : first, > the step size is calculated analytically at each iteration and is optimal, > and second, the descent direction is a combination of the gradient at the > current iteration and the descent direction at the previous iteration (a > "conjugate" direction, thus the algorithm's name) > > Hope it helps, > Cyril > >>>>>> > > > > > On 21/02/2018 12:57, Chao Wu wrote: > > L.S., > > I was working on FDK in the past and interative reconstruction methods are > still new to me. > I understand the concept of iteratvie methods but are not aware of > technical details in implementation. > > Recently I am trying SART but got streak artefacts in reconstructed > slices, as well as dots with very high value (both negative and positive) > at corners of slices. > When I checked intermediate images in the pipleline I found that those are > introduced in itk::DivideOrZeroOutImageFilter. > You can see from the attached picture: the left half shows the output > of rtk::RayBoxIntersectionImageFilter and the right half the output of > itk::DivideOrZeroOutImageFilter, both during processing of the first > projection in the first iteration. > Apparently, although it contains the whole object, my volume is relatively > small compared to the size of the detector images. > Then the rays intersecting the volume near corners and edges result in > small values in the output of the raybox filter, and subsequently magnify > the pixel values largely after division. > This may not be a problem if the detector images are noiseless, but in > practice this will magnify the noise and they will stay as streaks and dots > in slices. 
> > To correct for this I have something in mind, such as making the volume > bigger and cropping the detector images so that corners and edges of the > volume do not project to the cropped detector; or increasing the threshold > in the divide filter so that low values from edge/corner rays wll be zero > out. Since I am lack of experiences in interative methods, my question is > what the best or common practice will be to handle this? Thanks a lot. > > Regards, > Chao > > > > > _______________________________________________ > Rtk-users mailing listRtk-users at public.kitware.comhttps://public.kitware.com/mailman/listinfo/rtk-users > > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sart_th0_7.png Type: image/png Size: 351349 bytes Desc: not available URL: From cyril.mory at creatis.insa-lyon.fr Wed Feb 21 11:07:29 2018 From: cyril.mory at creatis.insa-lyon.fr (Cyril Mory) Date: Wed, 21 Feb 2018 17:07:29 +0100 Subject: [Rtk-users] Fwd: High pixel/voxel values in SART In-Reply-To: References: <0543e3d4-8a77-bb75-80a5-f5a9eb530a42@creatis.insa-lyon.fr> Message-ID: <01dda82d-4514-71d6-49f0-7f3ff6f940bd@creatis.insa-lyon.fr> Chao, Thanks for the feedback. It's a very encouraging first result. I'll mark the trick you used as an issue on github, so as to remember to implement it as an option and test it on more data. Indeed, with iterative methods, you have to process full slices. And it is even worse when you apply spatial regularization, because then reconstructing slices independently is possible, but less relevant than reconstructing a full volume. Cyril On 21/02/2018 16:52, Chao Wu wrote: > Hi Cyril, > > Thanks for your suggestion. > > I have tried increasing the threshold. 
My reconstructed slices are > 32x32 mm, so any ray travelling through the volume for less than 13 mm > won't cross the 32 mm diameter cylindrical object region (except for > the two ends, which are not of interest). > To leave some margin I set a threshold of 7 mm. See the attached > picture for the results of one SART iteration. > The left one is with the default threshold. You can see dark and > bright dots at the corners and some streaks coming from the top-left > corner. > The right one is with the 7 mm threshold, and the slice is clean except for > a trace of a circle outside, which is easy to remove afterwards. > So this works. > > I don't think that increasing the volume and cropping it in the end > will simply work unless the enlarged volume's projection is bigger > than the detector image, because the problematic values are not only > at edge and corner voxels but are also spread through the volume as streaks > by the forward projector, as shown in the left picture. > > I believe that OS-SART and SIRT can mitigate this problem too, since > they are less sensitive to noise, although they are slower. > > I will move to CG once I have a good SART implementation for the big > datasets in my group. There are still a lot of challenges for me. > Unlike FDK, where you can reconstruct a small subvolume directly, with > iterative methods (I believe) I always have to reconstruct full slices, > which results in memory issues, especially with CUDA. I need to stream > the reconstruction pipeline somehow... > > Best regards, > Chao > > 2018-02-21 13:38 GMT+01:00 Cyril Mory >: > > Hi Chao, > > Indeed, you identified the problem quite well. That division is > required by the maths of SART, but it brings its own set of > problems. To make a long story short, I don't know of any best > practice for solving this problem. My suggestions: > > - increasing the threshold to the size of a few voxels could do > the trick. 
We've never tried it, and I'm curious about the result > > - increasing the size of your volume, if you can, and cropping it > in the end, is also a good idea, and could work, but it would > increase the memory and time requirements, so I'd try it only if > the rest fails > > - the theoretical origin of these artifacts is that in SART, > projections are back-projected one by one instead of all together, > so when its turn comes, each projection can have a strong > influence on the volume. Try the --nprojspersubset argument. I've > explained its role in details in an earlier email, > https://public.kitware.com/pipermail/rtk-users/2017-July/010470.html > , > but the email doesn't display correctly, so I'm copy-pasting it > below between <<<<<< >>>>>>>. > > - use conjugate gradient instead, removing the lambda and > increasing the number of iterations (at least 30). CG requires > more iterations, but each iteration is shorter, and it can run > fully on GPU (switch --cudacg on if your GPU has enough memory, > off otherwise). > > Please keep us posted with the results of your experiments, > > Cyril > > <<<<<< > > Hi Lotte, > > > I'm on vacation, with very limited access to the Internet, so I > can't look at your SIRT result, but I can answer your question on > SART, SIRT and CG : all of those (as well as ART, and another > method called OS-SART) minimize the same cost function, which only > consists of a least-squares data-attachment term, i.e. || R f - p > ||^2, with f the sought volume, p the projections and R the > forward projection, but with different algorithms : > - SIRT does a simple gradient descent. Since the gradient of the > cost function is 2 R* ( R f - p ), with R* the transpose of R, > i.e. 
the back projection, this means that at each iteration, the > algorithm needs one forward and one back projection from ALL > angles, and one "update" of the volume > - ART, SART and OS-SART all use the same strategy: they split the > cost function into smaller bits (individual rays for ART, > individual projections for SART, sets of several projections for > OS-SART, so ART splits the most, and SART the least), and > alternately minimize the cost for each bit. We count one iteration > when each of the smaller bits has triggered an "update" of the > volume. This means that, per iteration, the smaller you split, the > more updates of the volume the algorithm performs, so the faster > (in terms of number of iterations) you get to convergence. > Obviously it does have a dangerous drawback: if data is > inconsistent (noise, scatter, truncation, ...), such strategies > may not converge > - Conjugate gradient minimizes the same cost function, without > splitting it (so like SIRT), but using the conjugate gradient > algorithm, which converges faster than a simple gradient descent, > for two reasons : first, the step size is calculated analytically > at each iteration and is optimal, and second, the descent > direction is a combination of the gradient at the current > iteration and the descent direction at the previous iteration (a > "conjugate" direction, thus the algorithm's name) > > Hope it helps, > Cyril > >>>>>> > > > > > On 21/02/2018 12:57, Chao Wu wrote: >> L.S., >> >> I was working on FDK in the past and interative reconstruction >> methods are still new to me. >> I understand the concept of iteratvie methods but are not aware >> of technical details in implementation. >> >> Recently I am trying SART but got streak artefacts in >> reconstructed slices, as well as dots with very high value (both >> negative and positive) at corners of slices. 
>> When I checked intermediate images in the pipleline I found that >> those are introduced in itk::DivideOrZeroOutImageFilter. >> You can see from the attached picture: the left half shows the >> output of?rtk::RayBoxIntersectionImageFilter and the right half >> the output of itk::DivideOrZeroOutImageFilter, both during >> processing of the first projection in the first iteration. >> Apparently, although it contains the whole object, my volume is >> relatively small compared to the size of the detector images. >> Then the rays intersecting the volume near corners and edges >> result in small values in the output of the raybox filter, and >> subsequently magnify the pixel values largely after division. >> This may not be a problem if the detector images are noiseless, >> but in practice this will magnify the noise and they will stay as >> streaks and dots in slices. >> >> To correct for this I have something in mind, such as making the >> volume bigger and cropping the detector images so that corners >> and edges of the volume do not project to the cropped detector; >> or increasing the threshold in the divide filter so that low >> values from edge/corner rays wll be zero out. Since I am lack of >> experiences in interative methods, my question is what the best >> or common practice will be to handle this? Thanks a lot. >> >> Regards, >> Chao >> >> >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> https://public.kitware.com/mailman/listinfo/rtk-users >> > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
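The splitting strategies Cyril describes can be illustrated on a toy least-squares problem. This is plain numpy, not RTK code: the SIRT step is a fixed-step gradient descent on ||R f - p||^2, and the per-ray sweep is the ART-style extreme of the splitting (SART would use one whole projection, i.e. several rays, per update):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((20, 5))      # toy forward projector: 20 rays, 5 voxels
f_true = rng.random(5)
p = R @ f_true               # consistent (noiseless) projections

def sirt_step(f, lam=0.02):
    # SIRT: one gradient step on ||R f - p||^2 using ALL rays at once;
    # the gradient is 2 R^T (R f - p), as in the explanation above.
    return f + lam * R.T @ (p - R @ f)

def art_sweep(f):
    # ART-style split: update the volume after EACH ray (a Kaczmarz sweep).
    for i in range(R.shape[0]):
        r = R[i]
        f = f + r * (p[i] - r @ f) / (r @ r)
    return f

f_sirt = np.zeros(5)
f_art = np.zeros(5)
for _ in range(10):
    f_sirt = sirt_step(f_sirt)
    f_art = art_sweep(f_art)
```

On consistent data, both residuals shrink; the finer split performs many more volume updates per iteration, which is the convergence-speed argument made above, while with noisy or truncated data that same aggressiveness is what makes ART/SART-type methods more likely to diverge.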
URL: From fredrik.hellman at gmail.com Thu Feb 22 05:11:36 2018 From: fredrik.hellman at gmail.com (Fredrik Hellman) Date: Thu, 22 Feb 2018 11:11:36 +0100 Subject: [Rtk-users] Online reconstruction Message-ID: Hi, I am wondering if it is possible to do "online" reconstruction with RTK using FDK backprojection, i.e. that you populate the geometry object angle by angle and backproject only one or a few projections at a time with the geometry object not fully populated initially. I have seen that the backprojection can be done projection by projection, but it seems that the geometry object must be populated with all angles before starting. For example in the code for parker weighting, it seems that the overscan angle is estimated based on the actual angles in the geometry object. Also, computing the angular gap weighting done in the 2D weighting depends on adjacent angles. Best regards, Fredrik Hellman -------------- next part -------------- An HTML attachment was scrubbed... URL: From wuchao04 at gmail.com Thu Feb 22 08:05:48 2018 From: wuchao04 at gmail.com (Chao Wu) Date: Thu, 22 Feb 2018 14:05:48 +0100 Subject: [Rtk-users] Online reconstruction In-Reply-To: References: Message-ID: Hi Fredrik, If the geometry and projection angles are known in advance, I think it is possible to perform FDK online. Regards, Chao 2018-02-22 11:11 GMT+01:00 Fredrik Hellman : > Hi, > > I am wondering if it is possible to do "online" reconstruction with RTK > using FDK backprojection, i.e. that you populate the geometry object angle > by angle and backproject only one or a few projections at a time with the > geometry object not fully populated initially. > > I have seen that the backprojection can be done projection by projection, > but it seems that the geometry object must be populated with all angles > before starting. For example in the code for parker weighting, it seems > that the overscan angle is estimated based on the actual angles in the > geometry object. 
Also, computing the angular gap weighting done in the 2D > weighting depends on adjacent angles. > > Best regards, > Fredrik Hellman > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Thu Feb 22 09:36:20 2018 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Thu, 22 Feb 2018 15:36:20 +0100 Subject: [Rtk-users] Online reconstruction In-Reply-To: References: Message-ID: Hi, Yes, you can even if you don't know the geometry in advance. The application rtkinlinefdk and its code rtkinlinefdk.cxx illustrates this. Best regards, Simon On Thu, Feb 22, 2018 at 2:05 PM, Chao Wu wrote: > Hi Fredrik, If the geometry and projection angles are known in advance, I > think it is possible to perform FDK online. > Regards, Chao > > 2018-02-22 11:11 GMT+01:00 Fredrik Hellman : > >> Hi, >> >> I am wondering if it is possible to do "online" reconstruction with RTK >> using FDK backprojection, i.e. that you populate the geometry object angle >> by angle and backproject only one or a few projections at a time with the >> geometry object not fully populated initially. >> >> I have seen that the backprojection can be done projection by projection, >> but it seems that the geometry object must be populated with all angles >> before starting. For example in the code for parker weighting, it seems >> that the overscan angle is estimated based on the actual angles in the >> geometry object. Also, computing the angular gap weighting done in the 2D >> weighting depends on adjacent angles. 
>> >> Best regards, >> Fredrik Hellman >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> https://public.kitware.com/mailman/listinfo/rtk-users >> >> > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkylove2323 at gmail.com Fri Feb 23 10:08:59 2018 From: lkylove2323 at gmail.com (Daniel lee) Date: Sat, 24 Feb 2018 00:08:59 +0900 Subject: [Rtk-users] Errors occurred during configuring RTK examples: RTKTargets.cmake is missing Message-ID: Hi, I`m almost newbie for RTK. When I try to configure example (e.g. HelloWorld or FirstReconstruction), there is an error message like following. [image: ?? ??? 1] I don`t have the file, RTKTargets.cmake. I tried with RTK 1.4.0, ITK 4.13.0, VS2015 I`m just looking for someone who will help me to make it. Sincerely K.Y. Daniel Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9235 bytes Desc: not available URL: From lotte.schyns at maastro.nl Fri Feb 23 11:37:25 2018 From: lotte.schyns at maastro.nl (Lotte Schyns) Date: Fri, 23 Feb 2018 16:37:25 +0000 Subject: [Rtk-users] RTK Scatter Correction In-Reply-To: References: <12be0887-19fc-388d-6f75-80b79c72e19d@maastro.nl> Message-ID: Thank you Simon. I see that the implementation of the Boellaard method is indeed not as complete as in the paper. I will have a go with the other methods. Especially the MC one seems interesting. On 21-02-18 08:50, Simon Rit wrote: Dear Lotte, These two options address different issues. The first one deals with the patient scatter and the second one with the detector glare which might come from the detector scatter (but maybe not (only)). 
See the work of, e.g., Poludniowski to understand the difference dx.doi.org/10.1088/0031-9155/54/12/016 dx.doi.org/10.1088/0031-9155/56/6/019 You'll see that the second paper suggests that both must be corrected. I haven't used those implementations a lot but I think the detector glare follows closely Poludniowski's paper but Boellaard does not. Boellaard implementation is the simplest option when only a constant is subtracted and I actually believe that it's not working properly (but this would need to be checked, it's been on my todo list for a long while). There are other options: - auto-detection of I0 in air does something similar to Boellaard's but differently http://www.openrtk.org/Doxygen/classrtk_1_1I0EstimationProjectionFilter.html - empirical cupping correction: https://doi.org/10.1118/1.2188076 - Monte Carlo correction if you have a CT image of your object. The tools to do this are open source, see https://doi.org/10.1016/j.phro.2017.09.002 I hope this helps, Simon On Tue, Feb 20, 2018 at 3:14 PM, Lotte Schyns > wrote: Hello, I'm investigating the possibility to perform scatter corrections in RTK. From what I could find, there seem to be two options (correct me if I'm wrong): 1) rtk::BoellaardScatterCorrectionImageFilter (Boellaard paper) 2) rtk::ScatterGlareCorrectionImageFilter (Poludniowski paper) Since both methods are based on a deconvolution approach using the edge-spread function, I was wondering what the difference in implementation is and in which cases one method would be preferred over the other. Lotte _______________________________________________ Rtk-users mailing list Rtk-users at public.kitware.com https://public.kitware.com/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... 
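As a rough illustration of the simplest of these options, subtracting a constant scatter estimate from the measured intensities before the log conversion, here is a small numpy sketch. The numbers and the clamping floor are made up, and this is not the actual rtk::BoellaardScatterCorrectionImageFilter (which, as noted above, may not be working properly as implemented):

```python
import numpy as np

def constant_scatter_correction(intensity, scatter, floor=1.0):
    # Subtract a constant scatter estimate from the measured intensities,
    # clamping to a small positive floor so the subsequent log stays defined.
    return np.maximum(intensity - scatter, floor)

I0 = 10000.0                                  # unattenuated intensity
measured = np.array([8000.0, 1200.0, 150.0])  # bright ray .. nearly opaque ray
scatter = 100.0                               # assumed constant scatter level

# Line integrals (attenuation) after correction; scatter matters most where
# the primary signal is weakest, i.e. behind the densest parts of the object.
line_integrals = np.log(I0 / constant_scatter_correction(measured, scatter))
```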
URL: From andreasg at phys.au.dk Fri Feb 23 15:48:26 2018 From: andreasg at phys.au.dk (Andreas Gravgaard Andersen) Date: Fri, 23 Feb 2018 21:48:26 +0100 Subject: [Rtk-users] Errors occurred during configuring RTK examples: RTKTargets.cmake is missing In-Reply-To: References: Message-ID: Hi Daniel I'm not exactly sure what you attempted, but it looks like you were trying to make CMake configure the path "E:/Source/RTK-1.4.0(ITK4.13.0)/examples" *alone*. In which case: You will need to configure, generate and build ITK and RTK first. Follow the instructions on http://wiki.openrtk.org/index.php/RTK_wiki_help Start at "Step 0" and see if you have missed anything on the way. Best regards Andreas __________________________________ Andreas Gravgaard Andersen Department of Oncology, Aarhus University Hospital Nørrebrogade 44, 8000, Aarhus C Mail: andreasg at phys.au.dk Cell: +45 3165 8140 On 23 February 2018 at 16:08, Daniel lee wrote: > Hi, I'm almost a newbie to RTK. > > > When I try to configure an example (e.g. HelloWorld or FirstReconstruction), > there is an error message like the following. > > [image: ?? ??? 1] > > I don't have the file RTKTargets.cmake. > > > I tried with RTK 1.4.0, ITK 4.13.0, VS2015 > > > I'm just looking for someone who can help me make it work. > > > Sincerely > > > K.Y. Daniel Lee > > > _______________________________________________ > Rtk-users mailing list > Rtk-users at public.kitware.com > https://public.kitware.com/mailman/listinfo/rtk-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 9235 bytes Desc: not available URL: From wuchao04 at gmail.com Fri Feb 23 16:03:40 2018 From: wuchao04 at gmail.com (Chao Wu) Date: Fri, 23 Feb 2018 22:03:40 +0100 Subject: [Rtk-users] Hounsfield Unit Conversion - New User In-Reply-To: References: Message-ID: This way you have correct_pixel_value = 100 + 0 * original_pixel_value Then you have all your projections flat with value 100. If you set --wpc=a,b,c,d,... it means correct_pixel_value = a + b * original_pixel_value + c * original_pixel_value^2 + d * original_pixel_value^3 + ... You have to figure out proper values yourself. 2018-02-23 18:25 GMT+01:00 muhammad waqar : > Hello, > > I'm still having a hard time using --wpc in the rtkfdk tool. > This is how I'm using it, could you please tell me how I'm doing it wrong? > > /RTK-bin/bin/rtkfdk \ > --lowmem \ > --geometry $RtkGeoOutLoc \ > --path $IMGLOC \ > --regexp '.*.his' \ > --output /Sorted_4DCBCT_Data/$FileName/sorted/wpcTest/WPCtest100b.mha' \ > --verbose \ > -spr=24 \ > --wpc= 100 0 \ > --spacing $Space,$Space,$Space \ > --dimension $Dim,$Dim,$Dim > > Thank you very kindly, > > > Phone: (647)885-5292 > > On Mon, Feb 19, 2018 at 6:00 AM, Chao Wu wrote: >> And be aware that --wpc receives polynomial coefficients, and if you only >> specify one number this will become the constant term only, which is >> probably not your intent. >> Regards, Chao >> >> >> 2018-02-19 7:39 GMT+01:00 Simon Rit : >> >>> Hi, >>> To convert to HU, you need to measure a phantom with water and air and >>> to apply the formula for HU conversion (see wikipedia >>> ). Note that due to >>> scatter in (Elekta) cone-beam CTs, you will have difficulties in doing this >>> conversion due to strong artifacts. >>> The --wpc coefficients can allow you to apply the slope of this formula >>> but the intercept must be subtracted after reconstruction. We illustrate >>> here how to use >>> wpc. 
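The polynomial convention Chao spells out can be written as a short sketch (the 1.6 slope below is a made-up illustration, not a calibration value):

```python
def apply_wpc(value, coeffs):
    # Chao's convention for rtkfdk's --wpc=a,b,c,d,... :
    # corrected = a + b*v + c*v**2 + d*v**3 + ...
    return sum(c * value ** i for i, c in enumerate(coeffs))

# --wpc=100,0 maps every pre-log projection value to 100 (a flat image),
# which is why the reconstruction above comes out blank:
flat = apply_wpc(0.3, [100, 0])      # 100.0
# a pure slope with zero offset (here the hypothetical --wpc=0,1.6)
# rescales the values instead of flattening them:
scaled = apply_wpc(0.3, [0, 1.6])    # ~0.48
```

Note also that the coefficients are comma-separated on the command line, so `--wpc= 100 0` (space-separated) does not pass two coefficients as intended.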
>>> Good luck, >>> Simon >>> >>> On Mon, Feb 19, 2018 at 1:01 AM, muhammad waqar < >>> muhammad.s.waqar at gmail.com> wrote: >>> >>>> Hello all, >>>> >>>> I'm a new user to ITK/RTK and I'm having a few issues: >>>> >>>> Following the ElektaReconstruction steps on the wiki, I was able to >>>> reconstruct my scan. However, the window and level are still in attenuation >>>> coefficients. How can I convert my scan to Hounsfield Units? >>>> >>>> Whenever I put in any value for --wpc in RTKFDK (I've tried values from >>>> 0.015-150) my recon results in a 'blank' as is attached below. >>>> >>>> Any help in resolving this would be great. >>>> >>>> Kindly, >>>> -Waqar Muhammad >>>> Carleton University >>>> >>>> >>>> ? >>>> ? >>>> >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> https://public.kitware.com/mailman/listinfo/rtk-users >>>> >>>> >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> https://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: waqar.png Type: image/png Size: 26199 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-18 at 6.55.27 PM.png Type: image/png Size: 225874 bytes Desc: not available URL: From lkylove2323 at gmail.com Fri Feb 23 18:01:54 2018 From: lkylove2323 at gmail.com (Daniel lee) Date: Sat, 24 Feb 2018 08:01:54 +0900 Subject: [Rtk-users] Errors occurred during configuring RTK examples: RTKTargets.cmake is missing In-Reply-To: References: Message-ID: Thanks for your reply I have already configured, generated and built ITK and RTK. (without any error) and I tried to do cmake inside of RTK directory that I made. 
Now I doubt that RTK is built properly or even CUDA 8.0. I tried to do all things again composedly, I get some option is not found CUDA_SDK_ROOT_DIR-NOTFOUND CUDA_nvcuvenc_LIBRARY-NOTFOUND If it is possible, It will be very helpful for me to get screen shot of proper cmake options from others -Configuration RTK-1.4.0 ITK4.13.0 MSVS2015 64bit My basic question is still open to help. Best regards, K.Y.Daniel Lee 2018-02-24 5:48 GMT+09:00 Andreas Gravgaard Andersen : > Hi Daniel > > I'm not exactly sure what you attempted, but it could look like you were > trying to make CMake configure the path "E:/Source/RTK-1.4.0(ITK4.13.0)/examples" > *alone*. > In which case: You will need to configure, generate and build ITK and RTK > first. > Follow the instructions on http://wiki.openrtk.org/index.php/RTK_wiki_help > Start at "Step 0" and see if you have missed anything on the way. > > Best regards > Andreas > > > __________________________________ > > Andreas Gravgaard Andersen > > Department of Oncology, > > Aarhus University Hospital > > N?rrebrogade 44, > > > 8000, Aarhus C > > > Mail: andreasg at phys.au.dk > > Cell: +45 3165 8140 > > > > On 23 February 2018 at 16:08, Daniel lee wrote: > >> Hi, I`m almost newbie for RTK. >> >> >> When I try to configure example (e.g. HelloWorld or FirstReconstruction), >> there is an error message like following. >> >> [image: ?? ??? 1] >> >> I don`t have the file, RTKTargets.cmake. >> >> >> I tried with RTK 1.4.0, ITK 4.13.0, VS2015 >> >> >> I`m just looking for someone who will help me to make it. >> >> >> Sincerely >> >> >> K.Y. Daniel Lee >> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> https://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 9235 bytes Desc: not available URL: From andreasg at phys.au.dk Fri Feb 23 18:47:58 2018 From: andreasg at phys.au.dk (Andreas Gravgaard Andersen) Date: Sat, 24 Feb 2018 00:47:58 +0100 Subject: [Rtk-users] Errors occurred during configuring RTK examples: RTKTargets.cmake is missing In-Reply-To: References: Message-ID: Hi again, First: "CUDA_SDK_ROOT_DIR-NOTFOUND" "CUDA_nvcuvenc_LIBRARY-NOTFOUND" Doesn't matter, i.e. these are not necessary for CMake to find the necessary files. Second: If rtkcuda.lib is not in [your build directory]/bin/[Debug or Release]/ it would mean something went wrong - probably in your configuration. Make sure to enable CUDA in the RTK CMake config, i.e. tick "RTK_USE_CUDA". Also, build RTK in the same configuration (Release, Debug or whatever) as the project that depends on it (otherwise Visual Studio might complain). If this does not solve your problem, try to build the rtkcuda project (when the entire solution is open in Visual Studio) by itself and watch the output for anything suspicious. Best regards Andreas __________________________________ Andreas Gravgaard Andersen Department of Oncology, Aarhus University Hospital Nørrebrogade 44, 8000, Aarhus C Mail: andreasg at phys.au.dk Cell: +45 3165 8140 On 24 February 2018 at 00:26, Daniel lee wrote: > Dear Andersen > > There is no rtkcuda.lib in my RTK directory. > > Could you give me some tips to solve it? 
>> >> I tried to do all things again composedly, I get some option is not found >> >> CUDA_SDK_ROOT_DIR-NOTFOUND >> CUDA_nvcuvenc_LIBRARY-NOTFOUND >> >> If it is possible, It will be very helpful for me to get screen shot of >> proper cmake options from others >> >> >> -Configuration >> RTK-1.4.0 >> ITK4.13.0 >> MSVS2015 64bit >> >> My basic question is still open to help. >> >> Best regards, >> >> K.Y.Daniel Lee >> >> >> 2018-02-24 5:48 GMT+09:00 Andreas Gravgaard Andersen > >: >> >>> Hi Daniel >>> >>> I'm not exactly sure what you attempted, but it could look like you were >>> trying to make CMake configure the path "E:/Source/RTK-1.4.0(ITK4.13.0)/examples" >>> *alone*. >>> In which case: You will need to configure, generate and build ITK and >>> RTK first. >>> Follow the instructions on http://wiki.openrtk.org/ind >>> ex.php/RTK_wiki_help >>> Start at "Step 0" and see if you have missed anything on the way. >>> >>> Best regards >>> Andreas >>> >>> >>> __________________________________ >>> >>> Andreas Gravgaard Andersen >>> >>> Department of Oncology, >>> >>> Aarhus University Hospital >>> >>> N?rrebrogade 44, >>> >>> >>> 8000, Aarhus C >>> >>> >>> Mail: andreasg at phys.au.dk >>> >>> Cell: +45 3165 8140 >>> >>> >>> >>> On 23 February 2018 at 16:08, Daniel lee wrote: >>> >>>> Hi, I`m almost newbie for RTK. >>>> >>>> >>>> When I try to configure example (e.g. HelloWorld or >>>> FirstReconstruction), there is an error message like following. >>>> >>>> [image: ?? ??? 1] >>>> >>>> I don`t have the file, RTKTargets.cmake. >>>> >>>> >>>> I tried with RTK 1.4.0, ITK 4.13.0, VS2015 >>>> >>>> >>>> I`m just looking for someone who will help me to make it. >>>> >>>> >>>> Sincerely >>>> >>>> >>>> K.Y. 
Daniel Lee >>>> >>>> >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> https://public.kitware.com/mailman/listinfo/rtk-users >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9235 bytes Desc: not available URL: From lkylove2323 at gmail.com Sat Feb 24 07:06:42 2018 From: lkylove2323 at gmail.com (Daniel lee) Date: Sat, 24 Feb 2018 21:06:42 +0900 Subject: [Rtk-users] Errors occurred during configuring RTK examples: RTKTargets.cmake is missing In-Reply-To: References: Message-ID: Dear Andreas Thank you for kind explanation. I will try to build it again from beginning Best regards K. Y. Daniel Lee 2018. 2. 24. ?? 8:47? "Andreas Gravgaard Andersen" ?? ??: Hi again, First: "CUDA_SDK_ROOT_DIR-NOTFOUND" "CUDA_nvcuvenc_LIBRARY-NOTFOUND" Doesn't matter, i.e. these are not necessary for cmake to find the necessary files. Second: If rtkcuda.lib is not in [your biuld directory]/bin/[Debug or Release]/ it would mean something went wrong - probably in your configuration. Make sure to enable CUDA in the RTK CMake config, i.e. tick "RTK_USE_CUDA". Also, build RTK in the same configuration (release, debug or whatever) as the project that depends on it (otherwise visual studio might complain) If this does not solve your problem try to build the rtkcuda project (when the entire solution is open in Visual studio) by itself and watch the output for anything suspicious. Beset regards Andreas __________________________________ Andreas Gravgaard Andersen Department of Oncology, Aarhus University Hospital N?rrebrogade 44, 8000, Aarhus C Mail: andreasg at phys.au.dk Cell: +45 3165 8140 On 24 February 2018 at 00:26, Daniel lee wrote: > Dear Andersen > > There is no rtkcuda.lib in my RTK directory. > > Could you give me some tips to solve it? 
> > I will reinstall CUDA and build ITK, RTK again though. > > Thank you for your kind attention. > > Sincerely, > > K. Y. Daniel > > 2018-02-24 8:01 GMT+09:00 Daniel lee : > >> Thanks for your reply >> >> I have already configured, generated and built ITK and RTK. (without any >> error) >> >> and I tried to do cmake inside of RTK directory that I made. >> >> Now I doubt that RTK is built properly or even CUDA 8.0. >> >> I tried to do all things again composedly, I get some option is not found >> >> CUDA_SDK_ROOT_DIR-NOTFOUND >> CUDA_nvcuvenc_LIBRARY-NOTFOUND >> >> If it is possible, It will be very helpful for me to get screen shot of >> proper cmake options from others >> >> >> -Configuration >> RTK-1.4.0 >> ITK4.13.0 >> MSVS2015 64bit >> >> My basic question is still open to help. >> >> Best regards, >> >> K.Y.Daniel Lee >> >> >> 2018-02-24 5:48 GMT+09:00 Andreas Gravgaard Andersen > >: >> >>> Hi Daniel >>> >>> I'm not exactly sure what you attempted, but it could look like you were >>> trying to make CMake configure the path "E:/Source/RTK-1.4.0(ITK4.13.0)/examples" >>> *alone*. >>> In which case: You will need to configure, generate and build ITK and >>> RTK first. >>> Follow the instructions on http://wiki.openrtk.org/ind >>> ex.php/RTK_wiki_help >>> Start at "Step 0" and see if you have missed anything on the way. >>> >>> Best regards >>> Andreas >>> >>> >>> __________________________________ >>> >>> Andreas Gravgaard Andersen >>> >>> Department of Oncology, >>> >>> Aarhus University Hospital >>> >>> N?rrebrogade 44, >>> >>> >>> 8000, Aarhus C >>> >>> >>> Mail: andreasg at phys.au.dk >>> >>> Cell: +45 3165 8140 >>> >>> >>> >>> On 23 February 2018 at 16:08, Daniel lee wrote: >>> >>>> Hi, I`m almost newbie for RTK. >>>> >>>> >>>> When I try to configure example (e.g. HelloWorld or >>>> FirstReconstruction), there is an error message like following. >>>> >>>> [image: ?? ??? 1] >>>> >>>> I don`t have the file, RTKTargets.cmake. 
>>>> >>>> >>>> I tried with RTK 1.4.0, ITK 4.13.0, VS2015 >>>> >>>> >>>> I`m just looking for someone who will help me to make it. >>>> >>>> >>>> Sincerely >>>> >>>> >>>> K.Y. Daniel Lee >>>> >>>> >>>> _______________________________________________ >>>> Rtk-users mailing list >>>> Rtk-users at public.kitware.com >>>> https://public.kitware.com/mailman/listinfo/rtk-users >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9235 bytes Desc: not available URL: From lkylove2323 at gmail.com Sat Feb 24 09:25:25 2018 From: lkylove2323 at gmail.com (Daniel lee) Date: Sat, 24 Feb 2018 23:25:25 +0900 Subject: [Rtk-users] Errors occurred during configuring RTK examples: RTKTargets.cmake is missing In-Reply-To: References: Message-ID: Dear Andreas I tried it again with another PC and It worked. There was some problem with GPU, I think. Thank you for comments. Best regards K. Y. Daniel Lee 2018-02-24 21:06 GMT+09:00 Daniel lee : > Dear Andreas > > Thank you for kind explanation. > > I will try to build it again from beginning > > Best regards > K. Y. Daniel Lee > > 2018. 2. 24. ?? 8:47? "Andreas Gravgaard Andersen" ?? > ??: > > Hi again, > > First: > "CUDA_SDK_ROOT_DIR-NOTFOUND" > "CUDA_nvcuvenc_LIBRARY-NOTFOUND" > Doesn't matter, i.e. these are not necessary for cmake to find the > necessary files. > > Second: > If rtkcuda.lib is not in [your biuld directory]/bin/[Debug or Release]/ > it would mean something went wrong - probably in your configuration. > Make sure to enable CUDA in the RTK CMake config, i.e. tick "RTK_USE_CUDA". 
> Also, build RTK in the same configuration (release, debug or whatever) as > the project that depends on it (otherwise visual studio might complain) > > If this does not solve your problem try to build the rtkcuda project > (when the entire solution is open in Visual studio) by itself and watch the > output for anything suspicious. > > Beset regards > Andreas > > __________________________________ > > Andreas Gravgaard Andersen > > Department of Oncology, > > Aarhus University Hospital > > N?rrebrogade 44, > > > 8000, Aarhus C > > > Mail: andreasg at phys.au.dk > > Cell: +45 3165 8140 > > > > On 24 February 2018 at 00:26, Daniel lee wrote: > >> Dear Andersen >> >> There is no rtkcuda.lib in my RTK directory. >> >> Could you give me some tips to solve it? >> >> I will reinstall CUDA and build ITK, RTK again though. >> >> Thank you for your kind attention. >> >> Sincerely, >> >> K. Y. Daniel >> >> 2018-02-24 8:01 GMT+09:00 Daniel lee : >> >>> Thanks for your reply >>> >>> I have already configured, generated and built ITK and RTK. (without any >>> error) >>> >>> and I tried to do cmake inside of RTK directory that I made. >>> >>> Now I doubt that RTK is built properly or even CUDA 8.0. >>> >>> I tried to do all things again composedly, I get some option is not found >>> >>> CUDA_SDK_ROOT_DIR-NOTFOUND >>> CUDA_nvcuvenc_LIBRARY-NOTFOUND >>> >>> If it is possible, It will be very helpful for me to get screen shot of >>> proper cmake options from others >>> >>> >>> -Configuration >>> RTK-1.4.0 >>> ITK4.13.0 >>> MSVS2015 64bit >>> >>> My basic question is still open to help. >>> >>> Best regards, >>> >>> K.Y.Daniel Lee >>> >>> >>> 2018-02-24 5:48 GMT+09:00 Andreas Gravgaard Andersen < >>> andreasg at phys.au.dk>: >>> >>>> Hi Daniel >>>> >>>> I'm not exactly sure what you attempted, but it could look like you >>>> were trying to make CMake configure the path "E:/Source/RTK-1.4.0(ITK4.13.0)/examples" >>>> *alone*. 
>>>> In which case: You will need to configure, generate and build ITK and >>>> RTK first. >>>> Follow the instructions on http://wiki.openrtk.org/ind >>>> ex.php/RTK_wiki_help >>>> Start at "Step 0" and see if you have missed anything on the way. >>>> >>>> Best regards >>>> Andreas >>>> >>>> >>>> __________________________________ >>>> >>>> Andreas Gravgaard Andersen >>>> >>>> Department of Oncology, >>>> >>>> Aarhus University Hospital >>>> >>>> N?rrebrogade 44, >>>> >>>> >>>> 8000, Aarhus C >>>> >>>> >>>> Mail: andreasg at phys.au.dk >>>> >>>> Cell: +45 3165 8140 >>>> >>>> >>>> >>>> On 23 February 2018 at 16:08, Daniel lee wrote: >>>> >>>>> Hi, I`m almost newbie for RTK. >>>>> >>>>> >>>>> When I try to configure example (e.g. HelloWorld or >>>>> FirstReconstruction), there is an error message like following. >>>>> >>>>> [image: ?? ??? 1] >>>>> >>>>> I don`t have the file, RTKTargets.cmake. >>>>> >>>>> >>>>> I tried with RTK 1.4.0, ITK 4.13.0, VS2015 >>>>> >>>>> >>>>> I`m just looking for someone who will help me to make it. >>>>> >>>>> >>>>> Sincerely >>>>> >>>>> >>>>> K.Y. Daniel Lee >>>>> >>>>> >>>>> _______________________________________________ >>>>> Rtk-users mailing list >>>>> Rtk-users at public.kitware.com >>>>> https://public.kitware.com/mailman/listinfo/rtk-users >>>>> >>>>> >>>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9235 bytes Desc: not available URL: From fredrik.hellman at gmail.com Sat Feb 24 13:21:27 2018 From: fredrik.hellman at gmail.com (Fredrik Hellman) Date: Sat, 24 Feb 2018 19:21:27 +0100 Subject: [Rtk-users] Online reconstruction In-Reply-To: References: Message-ID: Thank you for your quick replies! The rtkinlinefdk application is very interesting. I have read it and have two observations: 1. The Parker short scan code is currently disabled in that application. 
I guess the reason is that it cannot compute the overscan angle without the full geometry, and that it has to be provided in some other way than through the geometry object.

2. The reconstruction processing starts when there are at least 3 projections available (line 281), and it then starts with the second projection. The reason for this (I suppose) is that the angular gaps are computed from adjacent projection angles, so projection angles 0 and 2 need to be available to properly weight projection 1. Finally (line 343), the first (0) and last (N) projections are backprojected, which I suppose is because the angular gap between projections N and 0 should be available to the angular gap weighting of projections 0 and N. Typically the RTK Parker weighting puts 0 weight on the first and last projections, so that the very large angular gap weighting (e.g. for half scans) for the first and last projections has no effect. (Line numbers relate to commit 4198eb3 of https://github.com/SimonRit/RTK)

Are there any current plans or ideas on how to add support for inline Parker weighting (and allow e.g. half scans)? If not, what is the procedure for contributing such a Parker weighting implementation to the RTK project?

Best regards,
Fredrik

2018-02-22 15:36 GMT+01:00 Simon Rit :
> Hi,
> Yes, you can, even if you don't know the geometry in advance. The application rtkinlinefdk and its code rtkinlinefdk.cxx illustrate this.
> Best regards,
> Simon
>
> On Thu, Feb 22, 2018 at 2:05 PM, Chao Wu wrote:
>> Hi Fredrik, If the geometry and projection angles are known in advance, I think it is possible to perform FDK online.
that you populate the geometry object angle >>> by angle and backproject only one or a few projections at a time with the >>> geometry object not fully populated initially. >>> >>> I have seen that the backprojection can be done projection by >>> projection, but it seems that the geometry object must be populated with >>> all angles before starting. For example in the code for parker weighting, >>> it seems that the overscan angle is estimated based on the actual angles in >>> the geometry object. Also, computing the angular gap weighting done in the >>> 2D weighting depends on adjacent angles. >>> >>> Best regards, >>> Fredrik Hellman >>> >>> _______________________________________________ >>> Rtk-users mailing list >>> Rtk-users at public.kitware.com >>> https://public.kitware.com/mailman/listinfo/rtk-users >>> >>> >> >> _______________________________________________ >> Rtk-users mailing list >> Rtk-users at public.kitware.com >> https://public.kitware.com/mailman/listinfo/rtk-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.rit at creatis.insa-lyon.fr Mon Feb 26 01:49:50 2018 From: simon.rit at creatis.insa-lyon.fr (Simon Rit) Date: Mon, 26 Feb 2018 07:49:50 +0100 Subject: [Rtk-users] Online reconstruction In-Reply-To: References: Message-ID: Hi, 1. Yes, this is correct. You have to know some information about your geometry to be able to do the Parker weighting: the first angle after the large angular gap and the delta, i.e., the size of the gap. Currently, this is computed from the geometry. Two solutions to be able to do it online: pass the full geometry if you know in advance or set these parameters (needs a modification of Parker weighting class). 2. Correct. BTW, this is done for convenience but some people have suggested to correct this. Contributions are welcome. The best is to fork RTK on github and do a pull request when your code is ready. 
Simon

-------------- next part -------------- An HTML attachment was scrubbed... URL:
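[Editor's note on the weighting discussed above: the short-scan scheme Fredrik and Simon refer to is the standard Parker (1982) weighting. The sketch below is the textbook formula, not RTK's actual Parker weighting filter; the function name and angle conventions are illustrative only.]

```python
import math

def parker_weight(beta, gamma, delta):
    """Textbook Parker (1982) short-scan weight.

    beta  : gantry angle of the projection, in [0, pi + 2*delta]
    gamma : fan angle of the ray within the projection, in [-delta, delta]
    delta : half fan angle (half the detector opening)

    A short scan covers pi + 2*delta; redundant conjugate rays are
    smoothly weighted so each ray pair contributes a total weight of 1.
    """
    if 0.0 <= beta < 2.0 * (delta - gamma):
        # ramp-up region at the start of the scan
        return math.sin(math.pi / 4.0 * beta / (delta - gamma)) ** 2
    if 2.0 * (delta - gamma) <= beta < math.pi - 2.0 * gamma:
        # non-redundant region: full weight
        return 1.0
    if math.pi - 2.0 * gamma <= beta <= math.pi + 2.0 * delta:
        # ramp-down region at the end of the scan
        return math.sin(math.pi / 4.0 *
                        (math.pi + 2.0 * delta - beta) / (delta + gamma)) ** 2
    return 0.0
```

This shows the two properties discussed in the thread: the weight is 0 at beta = 0 and beta = pi + 2*delta (Fredrik's observation that the first and last projections get zero weight), and it depends only on the scan's start angle and on delta, the overscan - exactly the two parameters Simon says would have to be set explicitly for inline use. A conjugate ray pair (beta, gamma) and (beta + pi + 2*gamma, -gamma) always sums to weight 1.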