[Rtk-users] Cuda and RTK

Simon Rit simon.rit at creatis.insa-lyon.fr
Tue May 14 10:42:50 EDT 2019


Hi,
Ok, I think the concept of streams is something different from what I had
in mind. We don't use them (i.e., we use the default stream). What I meant
is that I believe the memory transfers (cudaMemcpy) are asynchronous and
overlap with the computation in the current implementation. I'm not sure
this is optimally implemented, but I believe there is some overlap.
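
For illustration only (this is not the actual RTK code, and all the names
are made up), a minimal sketch of what overlapping a host-to-device
transfer with a kernel looks like when both are issued on explicit,
non-default streams with pinned host memory:

// Minimal sketch, not RTK code: overlap a host-to-device copy with a kernel
// by putting them on two explicit streams and using pinned host memory.
#include <cuda_runtime.h>

__global__ void scaleKernel(float * d, int n)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)
    d[i] *= 2.f;
}

int main()
{
  const int n = 1 << 20;
  float *h_in, *d_in, *d_work;
  cudaMallocHost(&h_in, n * sizeof(float)); // pinned, required for truly async copies
  cudaMalloc(&d_in, n * sizeof(float));
  cudaMalloc(&d_work, n * sizeof(float));
  cudaMemset(d_work, 0, n * sizeof(float));

  cudaStream_t copyStream, computeStream;
  cudaStreamCreate(&copyStream);
  cudaStreamCreate(&computeStream);

  // The copy and the kernel are issued on different streams and may overlap.
  cudaMemcpyAsync(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice, copyStream);
  scaleKernel<<<(n + 255) / 256, 256, 0, computeStream>>>(d_work, n);

  cudaStreamSynchronize(copyStream);
  cudaStreamSynchronize(computeStream);

  cudaStreamDestroy(copyStream);
  cudaStreamDestroy(computeStream);
  cudaFreeHost(h_in);
  cudaFree(d_in);
  cudaFree(d_work);
  return 0;
}
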
Looking forward to your contributions!
Simon



On Tue, May 14, 2019 at 3:38 PM Fredrik Hellman <fredrik.hellman at gmail.com>
wrote:

> Hi,
>
> Ok, thank you! Although I have no concrete changes in mind right now, I
> at least know that the ITK CUDA common code is maintained via RTK and is
> open for contributions.
>
> As far as I understand from
> https://devblogs.nvidia.com/gpu-pro-tip-cuda-7-streams-simplify-concurrency,
> the default stream always synchronizes with the other streams unless the
> code is compiled with the per-thread default stream flag.
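>
> (Just to illustrate what I mean, not existing RTK/ITK code and the names
> are made up: the two ways around that implicit synchronization would be
> the compile flag or an explicitly non-blocking stream.)
>
> // Illustration only, not RTK/ITK code.
> // Option 1: compile with "nvcc --default-stream per-thread ..." so each
> // host thread gets its own, non-synchronizing default stream.
> // Option 2: create an explicit stream that does not synchronize with the
> // legacy default stream:
> #include <cuda_runtime.h>
>
> void copyAsync(float * d_dst, const float * h_src, size_t n)
> {
>   // h_src should be pinned (cudaMallocHost) for the copy to be truly async
>   cudaStream_t s;
>   cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking);
>   cudaMemcpyAsync(d_dst, h_src, n * sizeof(float), cudaMemcpyHostToDevice, s);
>   // ... work on the default stream can proceed in the meantime ...
>   cudaStreamSynchronize(s);
>   cudaStreamDestroy(s);
> }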
>
> Yes, it is for tomographic reconstruction, so I use some RTK code, e.g.
> FDK reconstruction. But I am writing some filters of my own that should
> work together with it, using the ITK CUDA image as the image type. So no, I
> don't have any specific RTK algorithm in mind.
>
> / Fredrik
>
> On Thu, 9 May 2019 at 21:46, Simon Rit <
> simon.rit at creatis.insa-lyon.fr> wrote:
>
>> Hi,
>> I would definitely be interested in improving RTK! We haven't discussed
>> this earlier and I don't see why the community would not be interested in
>> improvements. If you start coding something, my preference would be
>> backward-compatible code. Feel free to propose some PRs.
>> I'm a bit surprised by your comment on streams; I was pretty sure that
>> the computation and the data transfers were asynchronous as is. As for
>> the memory pool, why not, that sounds interesting, as long as we can still
>> allocate as much as before. Regarding the streaming part, I believe it
>> does not reallocate if the memory size does not change.
>> Are your developments related to tomographic reconstruction? If yes, are
>> there some RTK algorithms for which you expect a significant improvement?
>> Thanks for your suggestions,
>> Simon
>>
>> On Thu, May 9, 2019 at 6:08 PM Fredrik Hellman <fredrik.hellman at gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Thanks Simon! I see.
>>>
>>> I am writing a CUDA application in which RTK and ITK will be components,
>>> and I want to know how RTK's CUDA code cooperates with the CUDA
>>> functionality I already have in my application. I have found some behaviors
>>> that I would like to be able to control or change, or at least understand
>>> the design of:
>>>
>>> * Context handling. It appears the CUDA context manager keeps track of
>>> contexts and can also switch between them; I would like to be able to
>>> control this at a higher level. I can see situations where multiple
>>> libraries compete for control of the contexts.
>>> * Streams. It would be nice to be able to perform data transfers between
>>> CPU and GPU in parallel with computations, but that requires using a
>>> stream other than the default stream, since the default stream always
>>> synchronizes with the other streams.
>>> * Memory allocations. When CUDA memory is allocated and deallocated, it
>>> implicitly synchronizes all operations on the GPU (
>>> https://docs.nvidia.com/cuda/cuda-c-programming-guide/#implicit-synchronization).
>>> It would be good if one could either provide allocators/deallocators for a
>>> memory pool which doesn't need to issue the actual allocation calls but
>>> instead gets buffers from a pre-allocated pool (a rough sketch of what I
>>> mean follows below this list), or find a way to disable reallocations
>>> (which can sometimes be very frequent in ITK, especially when doing
>>> streaming).
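>>>
>>> Here is the rough, hypothetical sketch I mentioned (not an ITK/RTK API,
>>> just to illustrate the idea): one big cudaMalloc up front, then a simple
>>> bump allocator that hands out sub-buffers without any further
>>> cudaMalloc/cudaFree calls, and therefore without the implicit
>>> synchronization they cause:
>>>
>>> // Hypothetical sketch, not ITK/RTK code: a pre-allocated device memory pool.
>>> #include <cuda_runtime.h>
>>> #include <cstddef>
>>>
>>> class DevicePool
>>> {
>>> public:
>>>   explicit DevicePool(size_t bytes) : m_Size(bytes), m_Offset(0)
>>>   {
>>>     cudaMalloc(&m_Base, bytes);        // the only real allocation
>>>   }
>>>   ~DevicePool() { cudaFree(m_Base); }  // the only real deallocation
>>>
>>>   void * Allocate(size_t bytes)
>>>   {
>>>     size_t aligned = (bytes + 255) & ~size_t(255); // keep 256-byte alignment
>>>     if (m_Offset + aligned > m_Size)
>>>       return nullptr;                  // pool exhausted
>>>     void * p = static_cast<char *>(m_Base) + m_Offset;
>>>     m_Offset += aligned;
>>>     return p;
>>>   }
>>>
>>>   void Reset() { m_Offset = 0; }       // reuse the whole pool, no cudaFree
>>>
>>> private:
>>>   void * m_Base = nullptr;
>>>   size_t m_Size;
>>>   size_t m_Offset;
>>> };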
>>>
>>> Are these areas that the community would be interested in too? Have there
>>> been any developments or discussions about this earlier?
>>>
>>> Best regards,
>>> Fredrik Hellman
>>>
>>> On Tue, 30 Apr 2019 at 15:02, Simon Rit <simon.rit at creatis.insa-lyon.fr>
>>> wrote:
>>>
>>>> Hi,
>>>> Everything you're saying is correct. This development was co-funded by
>>>> the RTK consortium and Kitware and the goal was to transfer it to ITK.
>>>> However, this has never occurred and at this stage, it is only maintained
>>>> in RTK. Ideally, this could become an independent ITK module.
>>>> Is it something you'd like/need?
>>>> Cheers,
>>>> Simon
>>>>
>>>> On Tue, Apr 30, 2019 at 2:58 PM Fredrik Hellman <
>>>> fredrik.hellman at gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I have a question about the code that deals with CUDA within RTK.
>>>>>
>>>>> RTK uses the ITK CUDA image, the ITK CUDA data manager, and the ITK CUDA
>>>>> context manager (but not the ITK CUDA kernel manager so much?) to handle
>>>>> CUDA data. Is this maintained as part of RTK, or is it maintained somewhere
>>>>> else and only used in RTK? Now that RTK is moving closer to ITK, what will
>>>>> happen to that code?
>>>>>
>>>>> Best regards,
>>>>> Fredrik
>>>>>