[Rtk-users] Use existing Images when copying from/to GPU

Simon Rit simon.rit at creatis.insa-lyon.fr
Wed Sep 11 04:02:27 EDT 2019


Hi Clemens,
Sorry for taking so long to look at your issue. I checked what you
were saying, and:
- if I comment these two lines
<https://gist.github.com/clemisch/9ec752430b2eb65d9840d69fc8c058bb#file-forwardprojection-py-L60-L61>,
indeed I don't get the forward projection since the buffer is still on the
GPU only, as expected.
- if I add a line proj_img_gpu.GetBufferPointer() there instead, that does not
help. This is because the output of the forward projection is a new image,
which now owns the buffer that proj_img_gpu used to handle (and which
proj_img_gpu no longer handles).
- if I add a line fwd.GetOutput().GetBufferPointer() or
fwd.GetOutput().GetCudaDataManager().UpdateCPUBuffer(), it works fine (see the
short sketch below).
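For reference, a minimal sketch of that last option (untested, reusing the name
fwd from your gist):

fwd.Update()
out = fwd.GetOutput()                        # a new CudaImage; its buffer is GPU-only at this point
out.GetCudaDataManager().UpdateCPUBuffer()   # explicit GPU -> CPU transfer
# or, equivalently, just asking for the CPU buffer pointer:
# out.GetBufferPointer()
# after either call, the CPU buffer of out holds the forward projection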
Can you maybe give an example of something which does not work as expected?
Thanks again,
Simon

On Tue, Jul 16, 2019 at 4:21 PM C S <clem.schmid at gmail.com> wrote:

> Hey Maik,
>
> here is some example code for a forward projection on the GPU from a numpy
> array volume.
>
> https://gist.github.com/clemisch/9ec752430b2eb65d9840d69fc8c058bb
>
> You could write a projector class around that to store the proj_spacing,
> img_spacing, ... parameters and the projector objects, and maybe even re-use
> the GPU images; see the rough sketch below. I extracted this snippet from my
> framework but did not test it by itself; there might be bugs. Let me know if
> that helps you.
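> Something along these lines (untested; to_gpu_image and forward_project are
> just placeholders for the corresponding blocks of the gist, not real RTK
> functions):
>
> import numpy as np
>
> class CudaForwardProjector(object):
>     """Stores the geometry and spacing parameters once and re-uses them per call."""
>
>     def __init__(self, geometry, vol_spacing, vol_origin,
>                  proj_spacing, proj_origin):
>         self.geometry = geometry
>         self.vol_spacing = vol_spacing
>         self.vol_origin = vol_origin
>         self.proj_spacing = proj_spacing
>         self.proj_origin = proj_origin
>         # the GPU images and the filter could also be cached here for re-use
>
>     def project(self, vol_array, proj_shape):
>         # to_gpu_image / forward_project stand in for the gist's code
>         # (numpy -> ImageView -> CudaImage, then the CudaForwardProjection
>         # filter); they are placeholders, not real API calls
>         vol_gpu = to_gpu_image(vol_array, self.vol_spacing, self.vol_origin)
>         proj_gpu = to_gpu_image(np.zeros(proj_shape, dtype=np.float32),
>                                 self.proj_spacing, self.proj_origin)
>         return forward_project(vol_gpu, proj_gpu, self.geometry)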
>
>
> Best
> Clemens
>
> On Tue, Jul 16, 2019 at 8:18 AM imt <stille at imt.uni-luebeck.de> wrote:
>
>> Hej Clemens,
>>
>> I was following your conversation with Simon. I am quite interested in
>> the problem. Do you have some simple example source code of your solution?
>> That would be really helpful.
>>
>> Thank you in advance
>> Maik
>> On 9 Jul 2019, 18:24 +0200, C S <clem.schmid at gmail.com>, wrote:
>>
>> Hi Simon,
>>
>> I think I understood the issue.
>>
>> *Problem:* The CPUBufferPointer of the CudaImage gets altered after
>> calling .Update() on the back-/forward-projection filter. Because of that,
>> the existing numpy array (which I created the CPU and GPU images from) is
>> not modified by the filter.
>> *Solution:* Obtain the CudaDataManager of the CudaImage *before*
>> applying the back-/forward-projection filter, apply the filter and call
>> .Update(), then set the CPU pointer and update the buffer via
>>
>> manager.SetCPUBufferPointer(cpu_img.GetBufferPointer())
>> manager.UpdateCPUBuffer()
>>
>> The data from the cuda_img is correctly written in-place into the numpy
>> array I constructed the cpu_img from.
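>>
>> In context that looks roughly like this (schematic; cpu_img is the ImageView
>> on my numpy array, cuda_img the CudaImage built from it, and filt stands for
>> the back-/forward-projection filter set up as before):
>>
>> manager = cuda_img.GetCudaDataManager()   # grab it *before* running the filter
>>
>> filt.Update()                             # the filter writes into the GPU buffer
>>
>> # point the manager back at the original CPU buffer and pull the data down
>> manager.SetCPUBufferPointer(cpu_img.GetBufferPointer())
>> manager.UpdateCPUBuffer()
>> # the numpy array behind cpu_img now holds the result, in place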
>>
>> I don't know the underlying problem and cause why the CPUBuffer gets
>> mangled, but this method achieves what I need. Thank you very much for your
>> advice!
>>
>>
>> Best
>> Clemens
>>
>> On Mon, Jul 8, 2019 at 6:07 PM C S <clem.schmid at gmail.com> wrote:
>>
>> Hi Simon,
>>
>> I'm not sure I understand but have you tried grafting the Image or
>> CudaImage to an existing itk::Image (the Graft function)?
>>
>> I tried that, but when I call itk.GetArrayFromImage(cuda_img) after the
>> graft (cpu_img.Graft(cuda_img)) I get the error ```ValueError:
>> PyMemoryView_FromBuffer(): info->buf must not be NULL``` from within ITK
>> (or its Python bindings).
>>
>>
>> Again, I'm not sure I understand but you should be able to graft a
>> CudaImage to another CudaImage.
>>
>> If anything, I'd like to graft an Image into a CudaImage. When I try
>> something like `cuda_img.Graft(cpu_img)` I get a TypeError. If this and
>> the grafting above worked (including the array view), that would be
>> exactly what I initially wanted.
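>>
>> In code, the two attempts are just (schematic, with cpu_img / cuda_img
>> created as in the linked example):
>>
>> import itk
>>
>> cpu_img.Graft(cuda_img)                 # graft the CudaImage into the CPU image
>> arr = itk.GetArrayFromImage(cuda_img)   # -> ValueError: PyMemoryView_FromBuffer():
>>                                         #    info->buf must not be NULL
>>
>> cuda_img.Graft(cpu_img)                 # the other direction -> TypeError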
>>
>>
>> You can always ask for explicit transfers by calling the functions of the
>> data manager (accessible via CudaImage::GetCudaDataManager())
>>
>> I assume you mean manager.UpdateCPUBuffer()? When I run that, the CPU
>> image I used to create the GPU image (via this
>> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>)
>> is not updated.
>>
>> My scenario is this: I pass a numpy array as the volume to be forward
>> projected. I get an ImageView from that array, set the origin and spacing of
>> that image, and transfer it to the GPU via your method
>> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>.
>> For the output projections, I use an ImageView of a numpy.zeros array
>> with matching shape, spacing and origin and transfer that to the GPU the
>> same way. I then run the CudaForwardProjection filter. Now I'd like to have
>> the projection data on the CPU. Unfortunately, none of the suggested methods
>> worked for me other than using an itk.ImageDuplicator on the CudaImage :(
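>>
>> The duplicator workaround looks roughly like this (the exact instantiation
>> may differ on your side):
>>
>> import itk
>>
>> CudaImageType = itk.CudaImage[itk.F, 3]        # same type as the filter output
>>
>> fwd.Update()                                   # the CudaForwardProjection filter from above
>> dup = itk.ImageDuplicator[CudaImageType].New()
>> dup.SetInputImage(fwd.GetOutput())
>> dup.Update()
>> proj = itk.GetArrayFromImage(dup.GetOutput())  # a CPU copy of the projections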
>>
>> Sorry for the lengthy mail.
>>
>> Best
>> Clemens
>>
>>
>> On Mon, Jul 8, 2019 at 4:20 PM Simon Rit <simon.rit at creatis.insa-lyon.fr> wrote:
>>
>> Hi,
>> Conversion from Image to CudaImage is not optimal. The way I'm doing it
>> now is shown in an example in these few lines
>> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>.
>> I am aware of the problem and discussed it on the ITK forum
>> <https://discourse.itk.org/t/shadowed-functions-in-gpuimage-or-cudaimage/1614>
>> but I don't have a better solution yet.
>> I'm not sure what you mean by explicitly transferring data from/to the GPU,
>> but I guess you can always work with itk::Image and do your own CUDA
>> computations in the GenerateData of the ImageFilter if you don't like the
>> CudaImage mechanism.
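>> For reference, the pattern in those few lines is roughly the following
>> (from memory, so check the linked file for the exact calls):
>>
>> import itk
>>
>> CudaImageType = itk.CudaImage[itk.F, 3]
>>
>> # cpu_img is the existing CPU itk.Image[itk.F, 3]; wrap its pixel container
>> # in a CudaImage: no pixel copy here, the data is uploaded lazily when a
>> # CUDA filter runs
>> cuda_img = CudaImageType.New()
>> cuda_img.SetPixelContainer(cpu_img.GetPixelContainer())
>> cuda_img.CopyInformation(cpu_img)
>> cuda_img.SetBufferedRegion(cpu_img.GetBufferedRegion())
>> cuda_img.SetRequestedRegion(cpu_img.GetRequestedRegion())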
>> I hope this helps,
>> Simon
>>
>> On Mon, Jul 8, 2019 at 10:06 PM C S <clem.schmid at gmail.com> wrote:
>>
>> Dear RTK users,
>>
>> I'm looking for a way to use existing ITK Images (either on the GPU or in
>> RAM) when transferring data from/to the GPU. That is, not only re-using the
>> Image object, but also writing into the memory where its buffer is.
>>
>> Why: As I'm using the Python bindings, I guess this ties in with ITK
>> wrapping the CudaImage type. In
>> https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32 I
>> read that the memory management is done implicitly and that the CudaImage
>> can be used with CPU filters. However, when using the bindings,
>> only rtk.BackProjectionImageFilter can be used with CudaImages; the other
>> filters complain about not being wrapped for that type.
>>
>> That is why I want to explicitly transfer the data from/to the GPU, but
>> preferably using the existing Images and buffers. I can't rely on RTK
>> managing GPU memory implicitly.
>>
>>
>> Thank you very much for your help!
>> Clemens
>>
>>