[Rtk-users] Fwd: High pixel/voxel values in SART

Cyril Mory cyril.mory at creatis.insa-lyon.fr
Wed Feb 21 07:38:26 EST 2018


Hi Chao,

Indeed, you identified the problem quite well. That division is required 
by the maths of SART, but it brings its own set of problems. To make a 
long story short, I don't know of any best practice for solving this 
problem. My suggestions:

- increasing the threshold to the size of a few voxels could do the 
trick (see the sketch after this list). We've never tried it, and I'm 
curious about the result

- increasing the size of your volume, if you can, and cropping it at the 
end is also a good idea and could work, but it would increase the 
memory and time requirements, so I'd try it only if the rest fails

- the theoretical origin of these artifacts is that in SART, projections 
are back-projected one by one instead of all together, so when its turn 
comes, each projection can have a strong influence on the volume. Try 
the --nprojspersubset argument. I've explained its role in detail in an 
earlier email, 
https://public.kitware.com/pipermail/rtk-users/2017-July/010470.html, 
but the email doesn't display correctly, so I'm copy-pasting it below 
between <<<<<< and >>>>>>.

- use conjugate gradient instead, removing the lambda and increasing the 
number of iterations (at least 30). CG requires more iterations, but 
each iteration is shorter, and it can run fully on GPU (switch --cudacg 
on if your GPU has enough memory, off otherwise).
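
To make the threshold suggestion above more concrete, here is a minimal 
sketch of the division step. This is only an illustration, not the actual 
wiring inside rtk::SARTConeBeamReconstructionFilter; the function and 
variable names and the 3.0 value are assumptions you would adapt to your 
voxel spacing. The residual projection is divided by the length of each 
ray's intersection with the volume box (the output of 
rtk::RayBoxIntersectionImageFilter), and wherever that length is below the 
threshold, itk::DivideOrZeroOutImageFilter writes the constant instead of 
dividing, which is exactly what should suppress the near-zero denominators 
at the corners:

// Minimal sketch (not the RTK internals): how the threshold of
// itk::DivideOrZeroOutImageFilter decides which ray lengths are
// considered too small to divide by. Names are placeholders.
#include "itkImage.h"
#include "itkDivideOrZeroOutImageFilter.h"

using ProjectionType = itk::Image<float, 3>;

ProjectionType::Pointer
DivideResidualByRayLength(ProjectionType::Pointer residualProjection,
                          ProjectionType::Pointer rayBoxIntersection)
{
  using DivideFilterType =
    itk::DivideOrZeroOutImageFilter<ProjectionType, ProjectionType, ProjectionType>;
  auto divide = DivideFilterType::New();
  divide->SetInput1(residualProjection); // numerator: p - R f for the current projection
  divide->SetInput2(rayBoxIntersection); // denominator: ray length through the volume box
  // Pixels where the ray length is below the threshold are set to the
  // constant (0) instead of being divided, which suppresses the huge
  // values caused by rays that barely graze the volume at its corners.
  divide->SetThreshold(3.0); // e.g. ~3 mm for 1 mm voxels -- an assumption, tune to your spacing
  divide->SetConstant(0.0);
  divide->Update();
  return divide->GetOutput();
}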

Please keep us posted with the results of your experiments,

Cyril

<<<<<<

Hi Lotte,


I'm on vacation, with very limited access to the Internet, so I can't 
look at your SIRT result, but I can answer your question on SART, SIRT 
and CG: all of those (as well as ART, and another method called 
OS-SART) minimize the same cost function, which consists only of a 
least-squares data-attachment term, i.e. || R f - p ||^2, with f the 
sought volume, p the projections and R the forward projection, but with 
different algorithms:
- SIRT does a simple gradient descent. Since the gradient of the cost 
function is 2 R* ( R f - p ), with R* the transpose of R, i.e. the back 
projection, this means that at each iteration, the algorithm needs one 
forward and one back projection from ALL angles, and one "update" of the 
volume
- ART, SART and OS-SART all use the same strategy: they split the cost 
function into smaller bits (individual rays for ART, individual 
projections for SART, sets of several projections for OS-SART, so ART 
splits the most, and OS-SART the least), and alternately minimize the cost 
for each bit. We count one iteration when each of the smaller bits has 
triggered an "update" of the volume. This means that, per iteration, the 
smaller you split, the more updates of the volume the algorithm 
performs, so the faster (in terms of number of iterations) you get to 
convergence. Obviously it does have a dangerous drawback: if data is 
inconsistent (noise, scatter, truncation, ...), such strategies may not 
converge
- Conjugate gradient minimizes the same cost function, without splitting 
it (so like SIRT), but using the conjugate gradient algorithm, which 
converges faster than a simple gradient descent, for two reasons: 
first, the step size is calculated analytically at each iteration and is 
optimal, and second, the descent direction is a combination of the 
gradient at the current iteration and the descent direction at the 
previous iteration (a "conjugate" direction, hence the algorithm's name)

Hope it helps,
Cyril
 >>>>>>
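
To make the splitting strategies above more concrete, here is a toy, 
self-contained C++ sketch (not RTK code; the 2-unknown system, the lambda 
values and the function names are all assumptions chosen for illustration) 
in which each "projection" is reduced to a single ray, so the SART-style 
loop degenerates to ART. The SIRT-style iteration accumulates the gradient 
R* ( R f - p ) over all rays and updates the volume once, whereas the 
ART-style iteration updates the volume after every ray, which is why it 
reaches the solution in fewer iterations on this consistent toy data:

// Toy illustration of the splitting strategies described above (not RTK code).
#include <cstddef>
#include <iostream>
#include <vector>

// One "ray": a row r of the forward projection R and its measurement p.
struct Ray
{
  std::vector<double> r;
  double p;
};

double dot(const std::vector<double> & a, const std::vector<double> & b)
{
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    s += a[i] * b[i];
  return s;
}

// SIRT-style iteration: accumulate the gradient R* (R f - p) over ALL rays,
// then perform a single update of the volume f (the factor 2 is absorbed in lambda).
void sirtIteration(std::vector<double> & f, const std::vector<Ray> & rays, double lambda)
{
  std::vector<double> grad(f.size(), 0.0);
  for (const Ray & ray : rays)
  {
    const double residual = dot(ray.r, f) - ray.p;
    for (std::size_t j = 0; j < f.size(); ++j)
      grad[j] += ray.r[j] * residual;
  }
  for (std::size_t j = 0; j < f.size(); ++j)
    f[j] -= lambda * grad[j];
}

// ART/SART-style iteration: f is updated once per ray (once per "subset"),
// so later rays already see the corrections triggered by earlier ones.
// The division by dot(r, r) plays the role of SART's division by ray lengths.
void artIteration(std::vector<double> & f, const std::vector<Ray> & rays, double lambda)
{
  for (const Ray & ray : rays)
  {
    const double residual = dot(ray.r, f) - ray.p;
    const double norm = dot(ray.r, ray.r);
    for (std::size_t j = 0; j < f.size(); ++j)
      f[j] -= lambda * ray.r[j] * residual / norm;
  }
}

int main()
{
  // A consistent 3-ray, 2-unknown system whose solution is f = (2, 3).
  const std::vector<Ray> rays = { { { 1.0, 0.0 }, 2.0 },
                                  { { 0.0, 1.0 }, 3.0 },
                                  { { 1.0, 1.0 }, 5.0 } };
  std::vector<double> fSirt(2, 0.0);
  std::vector<double> fArt(2, 0.0);
  for (int it = 0; it < 20; ++it)
  {
    sirtIteration(fSirt, rays, 0.1);
    artIteration(fArt, rays, 1.0);
  }
  std::cout << "SIRT after 20 iterations: " << fSirt[0] << " " << fSirt[1] << "\n"
            << "ART  after 20 iterations: " << fArt[0] << " " << fArt[1] << std::endl;
  return 0;
}

In the same notation, conjugate gradient keeps the single update per 
iteration of the SIRT-style loop, but replaces the fixed step lambda by 
the analytically optimal step and mixes the current gradient with the 
previous descent direction, which is what makes it converge faster than 
plain gradient descent.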




On 21/02/2018 12:57, Chao Wu wrote:
> L.S.,
>
> I was working on FDK in the past and iterative reconstruction methods 
> are still new to me.
> I understand the concept of iterative methods but am not aware of the 
> technical details of their implementation.
>
> Recently I have been trying SART but got streak artefacts in the 
> reconstructed slices, as well as dots with very high values (both 
> negative and positive) at the corners of the slices.
> When I checked intermediate images in the pipeline I found that these 
> are introduced by itk::DivideOrZeroOutImageFilter.
> You can see from the attached picture: the left half shows the output 
> of rtk::RayBoxIntersectionImageFilter and the right half the output of 
> itk::DivideOrZeroOutImageFilter, both during processing of the first 
> projection in the first iteration.
> Apparently, although it contains the whole object, my volume is 
> relatively small compared to the size of the detector images.
> Then the rays intersecting the volume near its corners and edges result 
> in small values in the output of the raybox filter, and the subsequent 
> division greatly magnifies the pixel values.
> This may not be a problem if the detector images are noiseless, but in 
> practice this magnifies the noise, which then remains as streaks and 
> dots in the slices.
>
> To correct for this I have a few ideas in mind, such as making the 
> volume bigger and cropping the detector images so that the corners and 
> edges of the volume do not project onto the cropped detector, or 
> increasing the threshold in the divide filter so that low values from 
> edge/corner rays will be zeroed out. Since I lack experience in 
> iterative methods, my question is: what is the best or common practice 
> to handle this? Thanks a lot.
>
> Regards,
> Chao
>
