<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 8, 2019 at 10:45 PM C S <<a href="mailto:clem.schmid@gmail.com">clem.schmid@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Simon,<div><br></div><div>thank you for your swift reply and suggestions!</div><div><br></div><div>In fact I'm already using <a href="https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70" target="_blank">your snippet</a> for cpu->gpu transfer. My main issue is using the existing cpu image when transfering back to cpu, which I have not been able to do. I can use the itk.ImageDuplicator for getting the data into RAM but I haven't found a way to point the itk.ImageDuplicator's <b>output</b> to an exisiting Image. It always creates a new Image and allocates new memory AFAIK. </div></div></blockquote><div>I'm not sure I understand but have you tried grafting the Image or CudaImage to an existing itk::Image (the Graft function)?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Once I can do that the next optimization step would be to also use an exisiting CudaImage (with according buffer) when transfering cpu->gpu, <b>contrary to your snippet.</b> For that I have found no way at all so far.</div></div></blockquote><div>Again, I'm not sure I understand but you should be able to graft a CudaImage to another CudaImage.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>To clarify, I do not want to perform my own CUDA computations. Using RTK's CUDA forward/backprojectors is the main feature I want to use from RTK. 
By <i>explicit</i> I mean doing the transfer myself instead of relying on RTK's implicit mechanism described in <a href="https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32" target="_blank">the source code</a>. </div></div></blockquote><div>You can always request explicit transfers by calling the functions of the data manager (accessible via <span class="gmail-pl-en">CudaImage::GetCudaDataManager</span>()).</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div><br></div><div>Best</div><div>Clemens</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 8, 2019 at 4:20 PM Simon Rit <<a href="mailto:simon.rit@creatis.insa-lyon.fr" target="_blank">simon.rit@creatis.insa-lyon.fr</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi,</div><div>Conversion from Image to CudaImage is not optimal. The way I'm doing it now is shown in an example in <a href="https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70" target="_blank">these few lines</a>. 
I am aware of the problem and discussed it on the <a href="https://discourse.itk.org/t/shadowed-functions-in-gpuimage-or-cudaimage/1614" target="_blank">ITK forum</a>, but I don't have a better solution yet.</div><div>I'm not sure what you mean by explicitly transferring data from/to the GPU, but I guess you can always work with itk::Image and do your own CUDA computations in the GenerateData method of an ImageFilter if you don't like the CudaImage mechanism.</div><div>I hope this helps,</div><div>Simon<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 8, 2019 at 10:06 PM C S <<a href="mailto:clem.schmid@gmail.com" target="_blank">clem.schmid@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Dear RTK users,<div><br></div><div>I'm looking for a way to use existing ITK Images (either on the GPU or in RAM) when transferring data from/to the GPU. That is, not only re-using the Image object, but also writing into the memory backing its buffer. </div><div><br></div><div>Why: As I'm using the Python bindings, I guess this ties in with ITK wrapping the CudaImage type. In <a href="https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32" target="_blank">https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32</a> I read that the memory management is done implicitly and the CudaImage can be used with CPU filters. However, when using the bindings, only rtk.BackProjectionImageFilter can be used with CudaImages. The other filters complain about not being wrapped for that type. </div><div><br></div><div>That is why I want to explicitly transfer the data from/to the GPU, but preferably using the existing Images and buffers. 
I can't rely on RTK managing GPU memory implicitly.</div><div><br></div><div><br></div><div>Thank you very much for your help!</div><div>Clemens</div><div><br></div><div><br></div></div>
_______________________________________________<br>
Rtk-users mailing list<br>
<a href="mailto:Rtk-users@public.kitware.com" target="_blank">Rtk-users@public.kitware.com</a><br>
<a href="https://public.kitware.com/mailman/listinfo/rtk-users" rel="noreferrer" target="_blank">https://public.kitware.com/mailman/listinfo/rtk-users</a><br>
</blockquote></div>
</blockquote></div>
</blockquote></div></div>