[Rtk-users] Use existing Images when copying from/to GPU
C S
clem.schmid at gmail.com
Tue Jul 9 12:23:55 EDT 2019
Hi Simon,
I think I understood the issue.
*Problem:* The CPU buffer pointer of the CudaImage gets altered after calling
.Update() on the back-/forward projection filter. Because of that, the
existing numpy array (which I created the CPU and GPU images from) is not
modified by the filter.
*Solution:* Obtain the CudaDataManager of the CudaImage *before* applying
the back-/forward projection filter, apply the filter and call .Update(),
then reset the CPU pointer and update the buffer via
manager.SetCPUBufferPointer(cpu_img.GetBufferPointer())
manager.UpdateCPUBuffer()
The data from the cuda_img is correctly written in-place into the numpy
array I constructed the cpu_img from.
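In code, the workaround looks roughly like this. This is only a minimal sketch: it assumes a CUDA-enabled RTK build with Python wrapping, the CPU-to-GPU transfer follows the FirstCudaReconstruction.py example linked below, and the variable names and the commented-out filter call are illustrative, not a tested script:

```python
import itk
import numpy as np

# CPU image as a view on an existing numpy array (no copy)
vol = np.zeros([64, 64, 64], dtype=np.float32)
cpu_img = itk.GetImageViewFromArray(vol)

# Transfer to GPU as in the FirstCudaReconstruction.py example
cuda_img = itk.CudaImage[itk.F, 3].New()
cuda_img.SetPixelContainer(cpu_img.GetPixelContainer())
cuda_img.CopyInformation(cpu_img)
cuda_img.SetBufferedRegion(cpu_img.GetBufferedRegion())
cuda_img.SetRequestedRegion(cpu_img.GetRequestedRegion())

# Grab the data manager BEFORE running the filter
manager = cuda_img.GetCudaDataManager()

# ... set up and run the CUDA filter on cuda_img, then call .Update() ...

# Re-attach the original CPU buffer and copy the GPU data back into it
manager.SetCPUBufferPointer(cpu_img.GetBufferPointer())
manager.UpdateCPUBuffer()
# vol now holds the filter output in-place
```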
I don't know the underlying cause of why the CPU buffer pointer gets
mangled, but this method achieves what I need. Thank you very much for your
advice!
Best
Clemens
On Mon, Jul 8, 2019 at 6:07 PM C S <clem.schmid at gmail.com> wrote:
> Hi Simon,
>
> I'm not sure I understand but have you tried grafting the Image or
>> CudaImage to an existing itk::Image (the Graft function)?
>>
> I tried that but when I call itk.GetArrayFromImage(cuda_img) on the
> grafted image (cpu_img.Graft(cuda_img)) I get the error ```ValueError:
> PyMemoryView_FromBuffer(): info->buf must not be NULL``` from within ITK
> (or its Python bindings).
>
>
>> Again, I'm not sure I understand but you should be able to graft a
>> CudaImage to another CudaImage.
>>
> If anything, I'd like to graft an Image into a CudaImage. When I try
> something like `cuda_img.Graft(cpu_img)` I get a TypeError. If this and
> the grafting above worked (including the array view), that would be
> exactly what I initially wanted.
>
>
>> You can always ask explicit transfers by calling the functions of the
>> data manager (accessible via CudaImage::GetCudaDataManager())
>>
> I assume you mean manager.UpdateCPUBuffer()? When I run that, the CPU
> image I used to create the GPU image (by this
> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>)
> is not updated.
>
> My scenario is this: I pass a numpy array as the volume to be forward
> projected. I get an ImageView from that array, set the origin and spacing of
> that image, and transfer it to the GPU via your method
> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>.
> For the output projections, I use an ImageView of a numpy.zeros array
> with matching shape, spacing, and origin and transfer that to the GPU the same
> way. I then use the CudaForwardProjection filter. Now I'd like to have the
> projection data on the CPU. Unfortunately, none of the suggested methods worked
> for me other than using an itk.ImageDuplicator on the CudaImage :(
>
> Sorry for the lengthy mail.
>
> Best
> Clemens
>>> On Mon, Jul 8, 2019 at 4:20 PM Simon Rit <
>>> simon.rit at creatis.insa-lyon.fr> wrote:
>>>
>>>> Hi,
>>>> Conversion from Image to CudaImage is not optimal. The way I'm doing it
>>>> now is shown in an example in these few lines
>>>> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>.
>>>> I am aware of the problem and discussed it on the ITK forum
>>>> <https://discourse.itk.org/t/shadowed-functions-in-gpuimage-or-cudaimage/1614>
>>>> but I don't have a better solution yet.
>>>> I'm not sure what you mean by explicitly transferring data from/to the GPU,
>>>> but I guess you can always work with itk::Image and do your own CUDA
>>>> computations in the GenerateData of the ImageFilter if you don't like the
>>>> CudaImage mechanism.
>>>> I hope this helps,
>>>> Simon
>>>>
>>>> On Mon, Jul 8, 2019 at 10:06 PM C S <clem.schmid at gmail.com> wrote:
>>>>
>>>>> Dear RTK users,
>>>>>
>>>>> I'm looking for a way to use existing ITK Images (either on the GPU or in
>>>>> RAM) when transferring data from/to the GPU. That is, not only re-using the
>>>>> Image object, but writing into the memory where its buffer is.
>>>>>
>>>>> Why: As I'm using the Python bindings, I guess this ties in with ITK
>>>>> wrapping the CudaImage type. In
>>>>> https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32 I
>>>>> read that the memory management is done implicitly and the CudaImage can be
>>>>> used with CPU filters. However, when using the bindings,
>>>>> only rtk.BackProjectionImageFilter can be used with CudaImages. The other
>>>>> filters complain about not being wrapped for that type.
>>>>>
>>>>> That is why I want to explicitly transfer the data from/to the GPU, but
>>>>> preferably using the existing Images and buffers. I can't rely on RTK
>>>>> managing GPU memory implicitly.
>>>>>
>>>>>
>>>>> Thank you very much for your help!
>>>>> Clemens
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Rtk-users mailing list
>>>>> Rtk-users at public.kitware.com
>>>>> https://public.kitware.com/mailman/listinfo/rtk-users
>>>>>
>>>>