[Rtk-users] Cuda and RTK

Simon Rit simon.rit at creatis.insa-lyon.fr
Thu May 9 15:46:38 EDT 2019


Hi,
I would definitely be interested in improving RTK! We haven't discussed
this earlier and I don't see why the community would not be interested in
improvements. If you start coding something, my preference would be
backward-compatible code. Feel free to propose some PRs.
I'm a bit surprised by your comment on streams; I was pretty sure that
the computation and the data transfer were already asynchronous. As for
the memory pool, why not, that sounds interesting, as long as we can still
allocate as much as before. Regarding the streaming part, I believe it does
not reallocate if the memory size does not change.
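To be concrete about the overlap, here is a minimal stand-alone sketch (not
RTK code; the kernel and the sizes are made up for illustration) of what is
needed on the CUDA side: the copy and the kernel can overlap only if they
are issued on different non-default streams and the host buffer is pinned.

  #include <cuda_runtime.h>

  __global__ void processChunk(float * d, int n)
  {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
      d[i] *= 2.f;
  }

  int main()
  {
    const int n = 1 << 20;
    float *h = nullptr, *dA = nullptr, *dB = nullptr;
    cudaMallocHost(&h, n * sizeof(float)); // pinned host memory
    cudaMalloc(&dA, n * sizeof(float));
    cudaMalloc(&dB, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // The copy on s1 and the kernel on s2 may run concurrently; issued on
    // the default stream, they would serialize.
    cudaMemcpyAsync(dA, h, n * sizeof(float), cudaMemcpyHostToDevice, s1);
    processChunk<<<(n + 255) / 256, 256, 0, s2>>>(dB, n);

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFreeHost(h);
    cudaFree(dA);
    cudaFree(dB);
    return 0;
  }

Without pinned memory the asynchronous copy degrades to a synchronous one,
and on the default stream the two calls above would simply serialize.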
Are your developments related to tomographic reconstruction? If so, are
there RTK algorithms for which you expect a significant improvement?
Thanks for your suggestions,
Simon

On Thu, May 9, 2019 at 6:08 PM Fredrik Hellman <fredrik.hellman at gmail.com>
wrote:

> Hi,
>
> Thanks Simon! I see.
>
> I am writing a CUDA application of which RTK and ITK will be parts, and I
> want to know how RTK's CUDA code cooperates with the CUDA functionality I
> already have in my application. I have found some behaviors that it would
> be nice to be able to control/change, or at least to understand the design
> of:
>
> * Context handling. It appears the CUDA context manager keeps track of
> contexts and can also switch between them. I would like to be able to
> control this at a higher level, since I can see situations where multiple
> libraries compete for control of the contexts.
> * Streams. It would be nice to be able to perform CPU/GPU data transfers
> in parallel with computations, but that requires using a stream other than
> the default stream, since the default stream is always synchronous.
> * Memory allocations. When CUDA memory is allocated and deallocated, it
> implicitly synchronizes all operations on the GPU (
> https://docs.nvidia.com/cuda/cuda-c-programming-guide/#implicit-synchronization).
> It would be good if one could either provide allocators/deallocators that
> do not issue the actual allocation calls but instead hand out buffers from
> a pre-allocated memory pool (see the rough sketch after this list), or find
> a way to disable reallocations (which can sometimes be very frequent in
> ITK, especially when streaming).
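>
> To make the last point concrete, here is a rough, illustrative sketch (not
> ITK/RTK code; the class and method names are made up) of the kind of
> pre-allocated pool I have in mind. After the single up-front cudaMalloc,
> handing out buffers involves no CUDA allocation call and therefore no
> implicit synchronization:
>
>   #include <cuda_runtime.h>
>   #include <cstddef>
>
>   // Trivial bump allocator over one large device buffer.
>   class DevicePool
>   {
>   public:
>     explicit DevicePool(std::size_t bytes) : capacity_(bytes)
>     {
>       cudaMalloc(&base_, capacity_);   // one real allocation, up front
>     }
>     ~DevicePool() { cudaFree(base_); } // one real deallocation, at teardown
>
>     // Hand out a sub-buffer; no CUDA API call, hence no implicit sync.
>     void * acquire(std::size_t bytes)
>     {
>       std::size_t aligned = (bytes + 255) & ~std::size_t(255);
>       if (offset_ + aligned > capacity_)
>         return nullptr;                // pool exhausted; caller must handle
>       void * p = static_cast<char *>(base_) + offset_;
>       offset_ += aligned;
>       return p;
>     }
>
>     // Recycle the whole pool, e.g. between streaming iterations.
>     void reset() { offset_ = 0; }
>
>   private:
>     void *      base_ = nullptr;
>     std::size_t capacity_ = 0;
>     std::size_t offset_ = 0;
>   };
>
> A filter would then request scratch buffers from such a pool instead of
> calling cudaMalloc, and reset the pool once a streaming pass is done.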
>
> Are these areas that the community would be interested in too? Have there
> been any developments or discussions about this before?
>
> Best regards,
> Fredrik Hellman
>
> On Tue, 30 Apr 2019 at 15:02, Simon Rit <simon.rit at creatis.insa-lyon.fr>
> wrote:
>
>> Hi,
>> Everything you're saying is correct. This development was co-funded by
>> the RTK consortium and Kitware, and the goal was to transfer it to ITK.
>> However, this has never happened and, at this stage, it is only maintained
>> in RTK. Ideally, this could become an independent ITK module.
>> Is it something you'd like/need?
>> Cheers,
>> Simon
>>
>> On Tue, Apr 30, 2019 at 2:58 PM Fredrik Hellman <
>> fredrik.hellman at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I have a question about the code that deals with CUDA within RTK.
>>>
>>> RTK uses the ITK CudaImage, the CudaDataManager, and the CudaContextManager
>>> (but not the CudaKernelManager so much?) to handle CUDA data. Is this
>>> maintained as part of RTK, or is it maintained somewhere else and only used
>>> in RTK? Now that RTK is getting closer to ITK, what will happen to that code?
>>>
>>> Best regards,
>>> Fredrik
>>>