[Rtk-users] rtk user : FDKConeBeamReconstructionFilter

Simon Rit simon.rit at creatis.insa-lyon.fr
Thu Jan 18 11:31:46 EST 2018


Hi,
My experience is too limited to give a good answer. This is probably very
task-dependent.
There are papers on the question of binning:
https://doi.org/10.1088/0031-9155/58/5/1433
I don't know of people considering binning of the volume except for
iterative reconstruction:
https://doi.org/10.1088/0031-9155/49/1/010
Maybe others have an opinion?
Simon

On Wed, Jan 17, 2018 at 2:34 PM, Chao Wu <wuchao04 at gmail.com> wrote:

> Hi Simon,
>
> I have related questions. Maybe you or someone else has some empirical experience
> or something like a rule of thumb:
>
> We talk about voxels and the pixels of a virtual detector at the
> isocenter, so there is no influence from geometrical magnification.
> Then we can say that the best practice would be to set the voxel size
> similar to the pixel size.
>
> My questions are:
>
> - If you are going to use a bigger voxel size that is an arbitrary multiple
> of the pixel size, such as 1.5x, 2.7x or 6.2x, how do you determine whether
> you should do a binning of the detector images, and which binning factor is
> optimal? Should you always make the binned pixel size just bigger or just
> smaller than the voxel size (so for the example before you would use 2x2,
> 3x3 and 7x7 binning, or 1x1, 2x2 and 6x6, respectively)?
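>
> To make the two options concrete, here is a tiny hypothetical helper (not
> RTK code): for a voxel-to-pixel size ratio r, the "just smaller" choice is
> floor(r) and the "just bigger" choice is ceil(r).
>
>     #include <algorithm>
>     #include <cmath>
>     #include <cstdio>
>
>     // Hypothetical helpers: the two natural binning factors for a
>     // voxel-to-pixel size ratio r.
>     int binJustSmaller(double r) { return std::max(1, static_cast<int>(std::floor(r))); }
>     int binJustBigger(double r)  { return static_cast<int>(std::ceil(r)); }
>
>     int main()
>     {
>       // r = 1.5 -> 1 or 2; r = 2.7 -> 2 or 3; r = 6.2 -> 6 or 7,
>       // i.e. the 1x1/2x2, 2x2/3x3 and 6x6/7x7 pairs above.
>       for (double r : { 1.5, 2.7, 6.2 })
>         std::printf("r=%.1f -> %dx%d or %dx%d\n", r,
>                     binJustSmaller(r), binJustSmaller(r),
>                     binJustBigger(r), binJustBigger(r));
>       return 0;
>     }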
>
> - Instead of detector binning, another solution may be to use an upsampled
> voxel grid for the reconstruction and then do the binning on the 3D volume.
> For example, you may reconstruct a volume on a 2x2x2 finer grid and then
> apply a 2x2x2 binning to the output, if the voxel size is about 2 times
> the pixel size. Although the amount of computation is 8 times higher, is
> there any obvious benefit (resolution, contrast, S/N, etc.) to this scheme?
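>
> If it helps to make this concrete, here is a minimal sketch of the
> volume-binning step with ITK's BinShrinkImageFilter (block averaging; the
> finely sampled reconstruction is assumed to be given):
>
>     #include <itkBinShrinkImageFilter.h>
>     #include <itkImage.h>
>
>     using VolumeType = itk::Image<float, 3>;
>
>     // Average 2x2x2 blocks of the finely sampled reconstruction; the
>     // filter doubles the output spacing, so the binned volume keeps the
>     // intended voxel size.
>     VolumeType::Pointer BinVolume(VolumeType::Pointer fine)
>     {
>       using BinFilterType = itk::BinShrinkImageFilter<VolumeType, VolumeType>;
>       auto binner = BinFilterType::New();
>       binner->SetInput(fine);
>       BinFilterType::ShrinkFactorsType factors;
>       factors.Fill(2);
>       binner->SetShrinkFactors(factors);
>       binner->Update();
>       return binner->GetOutput();
>     }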
>
> Best regards,
> Chao
>
>
> 2018-01-16 18:42 GMT+01:00 Simon Rit <simon.rit at creatis.insa-lyon.fr>:
>
>> Hi Elena,
>> Please post your questions to the mailing list; they can interest
>> everyone in the community, and you may get (better) answers before mine.
>> FDK is very simple: it takes the projections, weights them with the cosine
>> and angular gaps, applies the ramp filter with a kernel adapted to the
>> number of pixels, and backprojects. The backprojection is voxel-based, so
>> it does a linear interpolation between pixels. There is never any attempt
>> to "adjust" one of these steps according to the pixel and voxel spacings,
>> so the user must manage it. There are different solutions to do so:
>> - do a binning of the projections (averaging 4 or 16 pixels, as you maybe
>> did?). In the command-line tool, you can do it with the --binning option.
>> - cut high frequencies during ramp filtering using the --hann and --hannY
>> options.
>> This is assuming larger voxels than pixels (there is no reason to have
>> voxels smaller than the pixels).
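>>
>> For illustration, a minimal sketch of this pipeline in C++ (the geometry
>> values and the Hann cut frequencies are placeholders, and the projections
>> are stubbed with a constant source, so this shows the wiring rather than a
>> working reconstruction):
>>
>>     #include <rtkConstantImageSource.h>
>>     #include <rtkFDKConeBeamReconstructionFilter.h>
>>     #include <rtkThreeDCircularProjectionGeometry.h>
>>
>>     int main()
>>     {
>>       using ImageType = itk::Image<float, 3>;
>>
>>       // Circular geometry: one entry per projection (placeholder values).
>>       auto geometry = rtk::ThreeDCircularProjectionGeometry::New();
>>       constexpr unsigned int nProj = 360;
>>       for (unsigned int i = 0; i < nProj; ++i)
>>         geometry->AddProjection(1000., 1500., i * 360. / nProj); // sid, sdd, angle
>>
>>       // Stand-in for the real projection stack (nProj slices).
>>       auto projSource = rtk::ConstantImageSource<ImageType>::New();
>>       ImageType::SizeType projSize;
>>       projSize[0] = 384; projSize[1] = 384; projSize[2] = nProj;
>>       projSource->SetSize(projSize);
>>       ImageType::SpacingType projSpacing;
>>       projSpacing.Fill(0.8);
>>       projSource->SetSpacing(projSpacing);
>>
>>       // Output voxel grid; the voxel size is the user's choice.
>>       auto volSource = rtk::ConstantImageSource<ImageType>::New();
>>       ImageType::SizeType volSize;
>>       volSize.Fill(384);
>>       volSource->SetSize(volSize);
>>       ImageType::SpacingType volSpacing;
>>       volSpacing.Fill(0.8);
>>       volSource->SetSpacing(volSpacing);
>>
>>       // FDK: weighting + ramp filtering + voxel-based backprojection.
>>       using FDKType = rtk::FDKConeBeamReconstructionFilter<ImageType>;
>>       auto fdk = FDKType::New();
>>       fdk->SetInput(0, volSource->GetOutput());  // volume to fill
>>       fdk->SetInput(1, projSource->GetOutput()); // projection stack
>>       fdk->SetGeometry(geometry);
>>       fdk->GetRampFilter()->SetHannCutFrequency(0.5);  // what --hann sets
>>       fdk->GetRampFilter()->SetHannCutFrequencyY(0.5); // what --hannY sets
>>       fdk->Update();
>>       return 0;
>>     }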
>> Let me know if this does not answer your question.
>> Simon
>>
>> On Tue, Jan 16, 2018 at 11:08 AM, Elena Padovani <e.padovani at r4p.it>
>> wrote:
>>
>>> Dear Simon,
>>> I am a new user of the Reconstruction Toolkit. I've been testing some
>>> tools lately and I was able to reconstruct using
>>> FDKConeBeamReconstructionFilter. I am now trying to figure out
>>> something more about it, and I was wondering if you could help me
>>> understand how rtkFDKConeBeamReconstructionFilter works.
>>>
>>> I am now reading raw images and importing them into the pipeline through
>>> an import filter, and then a tiler filter is used to create the stack of
>>> projections (sketched after the two cases below). I was testing whether
>>> the reconstructed volume would change when giving
>>> rtkFDKConeBeamReconstructionFilter input images of different resolutions.
>>> More precisely, I tested 2 cases:
>>> - 1) the input images are 1536x1536 with spacing 0.2,0.2 and the
>>> ConstantImageSource volume is 384,384,384 with spacing 0.8,0.8,0.8
>>> - 2) the input images are scaled down to 384,384 with spacing 0.8,0.8
>>> and the ConstantImageSource is again 384,384,384 with spacing 0.8,0.8,0.8
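>>>
>>> A minimal sketch of this stacking step, assuming itk::ImportImageFilter
>>> and itk::TileImageFilter (the raw buffers, sizes and spacing are
>>> placeholders):
>>>
>>>     #include <itkImage.h>
>>>     #include <itkImportImageFilter.h>
>>>     #include <itkTileImageFilter.h>
>>>     #include <vector>
>>>
>>>     using PixelType = float;
>>>     using SliceType = itk::Image<PixelType, 2>;
>>>     using StackType = itk::Image<PixelType, 3>;
>>>
>>>     // Stack raw 2D projection buffers (nx*ny pixels each) into a 3D image.
>>>     StackType::Pointer StackProjections(const std::vector<PixelType *> & buffers,
>>>                                         unsigned int nx, unsigned int ny,
>>>                                         double spacing)
>>>     {
>>>       auto tiler = itk::TileImageFilter<SliceType, StackType>::New();
>>>       itk::FixedArray<unsigned int, 3> layout;
>>>       layout[0] = 1;
>>>       layout[1] = 1;
>>>       layout[2] = 0; // 0: deduce the slice count from the number of inputs
>>>       tiler->SetLayout(layout);
>>>
>>>       for (unsigned int i = 0; i < buffers.size(); ++i)
>>>       {
>>>         auto importer = itk::ImportImageFilter<PixelType, 2>::New();
>>>         SliceType::SizeType size;
>>>         size[0] = nx;
>>>         size[1] = ny;
>>>         SliceType::RegionType region;
>>>         region.SetSize(size);
>>>         importer->SetRegion(region);
>>>         SliceType::SpacingType sp;
>>>         sp.Fill(spacing);
>>>         importer->SetSpacing(sp);
>>>         importer->SetImportPointer(buffers[i], nx * ny, false); // no ownership
>>>         importer->Update();
>>>         SliceType::Pointer slice = importer->GetOutput();
>>>         slice->DisconnectPipeline(); // detach from the temporary importer
>>>         tiler->SetInput(i, slice);
>>>       }
>>>       tiler->Update();
>>>       return tiler->GetOutput();
>>>     }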
>>> The reconstructed volumes appear to be different. Indeed, the range of
>>> values is quite different, and in the first case the image does not seem
>>> totally right (even though the volume is reconstructed). Can you explain
>>> to me how rtkFDKConeBeamReconstructionFilter (or maybe other filters used
>>> inside, such as ExtractImageFilter) manages different resolutions in input
>>> and output? Does it average the pixel values over the bigger voxel/pixel
>>> area?
>>>
>>> Thank you in advance for any hint,
>>> Kind regards,
>>>
>>> Elena
>>>
>>>
>>
>> _______________________________________________
>> Rtk-users mailing list
>> Rtk-users at public.kitware.com
>> https://public.kitware.com/mailman/listinfo/rtk-users
>>
>>
>

