[Rtk-users] Streaming filter in ProjectionsReader
Simon Rit
simon.rit at creatis.insa-lyon.fr
Thu Jun 11 17:35:05 EDT 2015
Sorry, I see what I missed: the whole projection stack is loaded by the
displaced detector filter. I think the best solution would be to move the
displaced detector filter into the FDKConeBeamReconstructionFilter, between
the extract filter and the FDKWeightProjectionFilter, or to roll back to the
CPU displaced detector image filter.
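A rough sketch of the proposed re-wiring inside FDKConeBeamReconstructionFilter
(a hand-written illustration only; the member names below are assumptions and
may not match the actual ones in the RTK sources):

    // Connect the displaced detector weighting between the extraction of the
    // current projection substack and the FDK cosine weighting, so that it
    // only ever processes the extracted substack rather than the full stack:
    m_DisplacedDetectorFilter->SetInput( m_ExtractFilter->GetOutput() );
    m_WeightFilter->SetInput( m_DisplacedDetectorFilter->GetOutput() );
    m_RampFilter->SetInput( m_WeightFilter->GetOutput() );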
On Thu, Jun 11, 2015 at 11:31 PM, Simon Rit <simon.rit at creatis.insa-lyon.fr>
wrote:
> Interesting problem. First, a comment: normally, with mhd files, only what
> is requested is read from disk, not the whole projection. I have implemented
> FDK so that it only requests the lines that are necessary for the piece of
> the volume being computed. However, the full projection is read with some
> input formats that do not implement streaming, or if you use the --hannY
> option or the scatter glare correction. Another comment that crosses my
> mind: once the data has been read from disk once, if you have enough RAM,
> it will be cached and the second read should be much faster, as if it were
> already in RAM.
> As for your solution, I would say that is already what it does. The reader,
> the preprocessing and the streaming are encapsulated in the
> rtk::ProjectionsReader, so if you request the full stack from the
> projections reader, i.e., without the --lowmem option, it will compute the
> whole stack, but it will do it one projection at a time since a streaming
> filter is encapsulated in it. Is this not what you observe? Projections
> will still be loaded onto the GPU substack by substack afterwards, since
> there is an extract filter in FDK. In short, don't use the --lowmem option
> and you should be set. If not, let me know, because then there is something
> I'm missing or something that is not working.
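> A minimal sketch of what that looks like from the application side (the
> file list is a placeholder; assuming a float 3D projection stack):
>
>     #include <rtkProjectionsReader.h>
>     #include <itkImage.h>
>
>     using ImageType = itk::Image<float, 3>;
>     auto reader = rtk::ProjectionsReader<ImageType>::New();
>     reader->SetFileNames( fileNames );  // std::vector<std::string> of projection files
>     // Requesting the full stack: the whole preprocessed stack ends up in RAM,
>     // but the streaming filter encapsulated in the reader processes it one
>     // projection at a time, so preprocessing never needs more than roughly
>     // one projection of additional memory at a time.
>     reader->Update();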
> Simon
>
> On Thu, Jun 11, 2015 at 10:52 PM, Chao Wu <wuchao04 at gmail.com> wrote:
>
>> Thanks Simon, it is clear.
>>
>> I have an issue with rtkfdk. When using CUDA, the first several (mini-)
>> filters in rtkfdk look like this:
>> file reader -> preprocessing filters -> streaming filter -> CUDA
>> displaced detector filter -> ...
>> When the streaming filter is updated, all preprocessed projection data
>> will be stored in RAM, which is not a problem so far.
>> However, when the CUDA displaced detector filter executes, a copy from RAM
>> to GPU memory (GRAM) is triggered, and in my case the GRAM is not big
>> enough to hold the data.
>> I know --lowmem will work, but when it is combined with --division the
>> projection files are re-read from the hard disk several times, which is
>> quite inefficient and slow.
>>
>> My previous temporary solution was to use the streaming filter as a sort
>> of isolator:
>> instead of updating the streaming filter in the first step, I only update
>> the last preprocessing filter.
>> This way, when the remaining part of the pipeline is updated, the buffered
>> region of the streaming filter's input is the full stack of projections,
>> but the buffered region of its output is just the region requested by the
>> CUDA displaced detector filter (#subsetsize projections). Then there is no
>> problem copying this small number of projections from RAM to GRAM during
>> execution of the CUDA displaced detector filter.
>> Nevertheless, looking at your explanation of the purpose of the streaming
>> filter, I realize this is not an optimal solution.
>>
>> Now I am thinking of adding a simple filter between the streaming filter
>> and the CUDA displaced detector filter for isolation, so that its input
>> buffers the full stack of projections after the streaming filter is
>> updated, but its output only buffers the requested region (#subsetsize
>> projections). What do you think of this solution? Do you have any better
>> ideas? A sketch of the idea follows below.
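>> Here is a minimal sketch of the isolation idea, using a non-in-place
>> itk::CastImageFilter as the pass-through (my own illustration; the filter
>> choice and the variable names streamingFilter and cudaDDF are hypothetical,
>> not existing rtkfdk code):
>>
>>     #include <itkCastImageFilter.h>
>>     #include <itkCudaImage.h>
>>
>>     using ImageType = itk::CudaImage<float, 3>;
>>     // A same-type cast copies the data unchanged; with in-place execution
>>     // disabled it allocates its own output buffer.
>>     auto isolator = itk::CastImageFilter<ImageType, ImageType>::New();
>>     isolator->InPlaceOff();
>>     isolator->SetInput( streamingFilter->GetOutput() );
>>     cudaDDF->SetInput( isolator->GetOutput() );
>>     // After the pipeline updates, the isolator's input still buffers the
>>     // full projection stack in RAM, but its output (and therefore the
>>     // RAM-to-GRAM copy made by the CUDA filter) only holds the requested
>>     // #subsetsize projections.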
>>
>> Regards,
>> Chao
>>
>>
>> 2015-06-11 19:19 GMT+02:00 Simon Rit <simon.rit at creatis.insa-lyon.fr>:
>>
>>> Not stupid at all. We have recently introduced the possibility to
>>> automatically compute I0 (the pixel value without an object in the beam).
>>> This is the I0EstimationProjectionFilter in the ProjectionsReader; see the
>>> graph in the doc <http://www.openrtk.org/Doxygen/classrtk_1_1ProjectionsReader.html>.
>>> In many scanners the exposure varies from frame to frame, and we wanted
>>> this estimate to be projection-specific, which is why we did this. We also
>>> think that all the preprocessing is more efficient when done per
>>> projection, but this is pure conjecture.
>>> In our opinion there were only pros, but do you see cons to this solution?
>>> Simon
>>>
>>> On Thu, Jun 11, 2015 at 7:06 PM, Chao Wu <wuchao04 at gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> Maybe a stupid question... what is the purpose of the streaming filter
>>>> at the end of the mini-filter pipeline inside the ProjectionsReader?
>>>> Thanks.
>>>>
>>>> Regards, Chao
>>>>
>>>>
>>>>
>>>
>>
>