[Paraview] Sanity Check - Parallel GPU rendering

Philippe philippe.raven at gmail.com
Wed Apr 28 03:57:24 EDT 2010


On Wed, Apr 28, 2010 at 7:43 AM, Shree Kumar <shree.shree at gmail.com> wrote:

> Hi Paul,
>
> On Tue, Apr 27, 2010 at 5:06 PM, Paul McIntosh <
> paul.mcintosh at internetscooter.com> wrote:
>
>> Hi All,
>>
>> I am trying to get parallel GPU-accelerated volume rendering happening,
>> and I just want to confirm my understanding below is correct...
>>
>> For GPU acceleration I will require an X server running on each node so
>> that ParaView can create a local OpenGL context and make OpenGL calls to
>> the hardware. The OpenGL context needs to come from something like the
>> NVIDIA libraries, as anything that implements OpenGL in software (e.g.
>> Mesa) won't give hardware acceleration.
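>>
>> As a sanity check, I assume something like the following, run on each
>> node, would confirm which OpenGL stack is actually in use (glxinfo
>> comes with the mesa-utils/glx-utils packages):
>>
>>   export DISPLAY=:0.0
>>   glxinfo | grep -E "OpenGL (vendor|renderer)"
>>
>> If that reports the NVIDIA vendor and renderer strings rather than a
>> Mesa software renderer, hardware acceleration should be available.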
>>
>> Googling shows me that it is possible to create screenless X server
>> configurations that use the video hardware (e.g. with an NVIDIA Tesla).
>> So in a cluster I am assuming this is what I need to do on each node
>> that I want to participate in the parallel rendering.
>>
>> Is the above correct? Or is there a simpler way of doing it?
>>
>>
> This approach seems correct.
>
> You would need to configure one X screen per GPU you have.
>
> nvidia-xconfig -a --use-display-device=none
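>
> For reference, the xorg.conf this generates should contain one
> Device/Screen pair per GPU, roughly like the following (the BusID here
> is made up; nvidia-xconfig fills in the values it probes from the
> hardware):
>
>   Section "Device"
>       Identifier "Device0"
>       Driver     "nvidia"
>       BusID      "PCI:3:0:0"
>   EndSection
>
>   Section "Screen"
>       Identifier "Screen0"
>       Device     "Device0"
>       Option     "UseDisplayDevice" "none"
>   EndSection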
>
> After that, start the X server. Next, write a machine file which will
> run one instance of the ParaView render server on each GPU. You would
> need to set the DISPLAY environment variable to point to the X server
> and X screen corresponding to each GPU.
>
> Use this machine file with mpirun, then use the ParaView client to
> connect to the machine on rank 0.
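>
> A rough sketch of the whole thing, assuming two nodes with two GPUs
> each and Open MPI (the hostnames, install path and wrapper script are
> just examples; other MPI implementations export the local rank under a
> different variable):
>
>   # machines.txt -- two slots per node, one per GPU
>   node1 slots=2
>   node2 slots=2
>
>   #!/bin/sh
>   # pvserver-wrapper.sh -- bind each local rank to its own X screen;
>   # OMPI_COMM_WORLD_LOCAL_RANK is set by Open MPI
>   export DISPLAY=:0.${OMPI_COMM_WORLD_LOCAL_RANK:-0}
>   exec /opt/paraview/bin/pvserver "$@"
>
>   # launch, then point the ParaView client at node1 (rank 0)
>   mpirun -np 4 -machinefile machines.txt ./pvserver-wrapper.sh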
>
> I'm not much of a ParaView user, but I have set up parallel rendering
> using this method.
>
> HTH
> -- Shree
> http://www.shreekumar.in/
>
>> Cheers,
>>
>> Paul
>> ---
>> www.internetscooter.com
>>
Hi guys,

There's another way to get *real* parallel GPU rendering. If you want to
use the power of GPUs on unrelated cards (non-SLI, but in the same
computer) or on several graphics nodes, you can use VirtualGL with
sort-last or sort-first rendering. This is very useful and efficient for
getting very good performance, rather than just using one or two local
GPUs to render your ParaView instance.
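
To make that concrete, the VirtualGL side looks roughly like this (the
display number and pvserver path are examples; vglrun is VirtualGL's
launcher):

  # on each render node, run the server process through vglrun so its
  # OpenGL calls are redirected to the local GPU behind X display :0
  vglrun -d :0.0 /opt/paraview/bin/pvserver

VirtualGL only makes sure each instance really renders on a GPU; the
sort-first/sort-last compositing itself is done by the parallel
rendering layer.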

Cheers.

Philippe.
