[ParaView] In-Situ Visualization
Berk Geveci
berk.geveci at kitware.com
Fri Feb 26 06:31:24 EST 2010
Yes, it is possible, though not necessarily straightforward, depending on how
much experience you have with visualization algorithms and CUDA/GLSL. You will
also have to write a new mapper, since the current ones work on data in main
memory. If I were estimating how long this would take, I would say no
less than 6 months for something like 4 algorithms and 1 mapper.
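[Editor's note: the "configured via XML" that Ning asks about below refers to ParaView's server-manager XML, which exposes a VTK filter class as a proxy in the ParaView UI. A minimal sketch of such a proxy definition for a hypothetical GPU contour filter follows; the class name vtkMyGPUContourFilter and its SetValue method are assumptions for illustration, not an existing API.]

```xml
<ServerManagerConfiguration>
  <ProxyGroup name="filters">
    <!-- "class" names the C++ VTK filter the proxy instantiates on the server -->
    <SourceProxy name="MyGPUContour"
                 class="vtkMyGPUContourFilter"
                 label="My GPU Contour">
      <InputProperty name="Input" command="SetInputConnection">
        <ProxyGroupDomain name="groups">
          <Group name="sources"/>
          <Group name="filters"/>
        </ProxyGroupDomain>
      </InputProperty>
      <!-- hypothetical property mapped to the filter's SetValue method -->
      <DoubleVectorProperty name="ContourValue"
                            command="SetValue"
                            number_of_elements="1"
                            default_values="0.0"/>
    </SourceProxy>
  </ProxyGroup>
</ServerManagerConfiguration>
```

Note that the XML only registers the filter with ParaView; the C++ class itself must still do the actual (GPU) work, which is the part estimated above at several months of effort.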
On Fri, Feb 26, 2010 at 3:35 AM, liuning <tantics at gmail.com> wrote:
> Hi Berk,
>
> Thank you for your reply. One of the priorities of our in-situ
> visualization work is speed, so moving data to main memory may not be a
> preferred approach. We need only a few basic visualization functions (e.g.,
> slice and contour) and analysis functions (e.g., sub-sampling). Is it
> possible for us to write our own filters that run on the GPU and integrate
> them into the current VTK pipeline? Since the sources, readers, writers, and
> filters are configured via XML files, if we set up the appropriate filter
> XML configuration file, will it work for us?
>
> Thanks again,
>
> -Ning
>
>
> On Thu, Feb 25, 2010 at 10:29 PM, Berk Geveci <berk.geveci at kitware.com> wrote:
>
>> It depends on how you want to do it. If you want to keep the data in GPU
>> memory and perform all of the analysis/visualization on the GPU, VTK is
>> currently not the right tool: almost all of VTK's filters work on the CPU,
>> whereas rendering happens on the GPU. If you are OK with moving data to
>> main memory whenever you want to do analysis/vis, the in-situ vis work we
>> are doing would be a fit for you. Let me know if you are interested and I
>> can get you started.
>>
>> -berk
>>
>> On Wed, Feb 24, 2010 at 10:06 PM, liuning <tantics at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> We are attempting to do in-situ visualization of a simulation that runs
>>> mainly on GPUs. I noticed that there was a presentation at Vis09 describing
>>> how to do in-situ visualization with the help of Python scripts. Since the
>>> simulation data reside in GPU memory, I wonder whether the approach
>>> mentioned above can handle this. Or are there better methods for doing
>>> that?
>>>
>>> Thanks a lot.
>>>
>>> -Ning
>>>
>>>
>>>
>>> _______________________________________________
>>> Powered by www.kitware.com
>>>
>>> Visit other Kitware open-source projects at
>>> http://www.kitware.com/opensource/opensource.html
>>>
>>> Please keep messages on-topic and check the ParaView Wiki at:
>>> http://paraview.org/Wiki/ParaView
>>>
>>> Follow this link to subscribe/unsubscribe:
>>> http://www.paraview.org/mailman/listinfo/paraview
>>>
>>>
>>
>