[Paraview-developers] DisplayCluster + ParaView

Biddiscombe, John A. biddisco at cscs.ch
Thu Jun 6 02:19:02 EDT 2013


Ken

>
If you give the arguments -tdx=1 -tdy=1, ParaView should happily create a "tiled" display of a single image that you can then break up and transfer with another software service.
<

That's interesting. So if I do something along the lines of
mpiexec -n 1024 pvserver -tdx=1 -tdy=1
I'll get just one window. But sadly I'll have 1024 framebuffers of 0.5 GB or thereabouts each. That's a shame. It would certainly be a great starting point for me, though.

Question.
Would this scenario be a potential use case for running the data server and render server as separate jobs (not sure these are exactly the right commands, it's just for illustration):
mpiexec -n 1024 pvdataserver
mpiexec -n 12 pvrenderserver -tdx=1 -tdy=1 (or 4x3, or whatever)
so that the majority of the heavy lifting is done by the data servers, and the render servers can do the display with a reduced footprint in terms of framebuffer allocation?

I've never tried using the data/render server mode of operation, but it seems like this might be a good strategy - n'est-ce pas?

Thanks

JB

From: Moreland, Kenneth [mailto:kmorel at sandia.gov]
Sent: 04 June 2013 20:50
To: Biddiscombe, John A.; paraview-developers at paraview.org
Subject: Re: [Paraview-developers] DisplayCluster + ParaView

John,

I have no experience with DisplayCluster and haven't messed with tiled displays in a long time, so I don't have a complete answer for you.  But perhaps I can fill in some holes.

If you give the arguments -tdx=1 -tdy=1, ParaView should happily create a "tiled" display of a single image that you can then break up and transfer with another software service.  The only real problem I can see is that there is no way to directly specify the number of pixels in the display.  The original implementation assumed that the node GPUs are connected directly to displays, so it just gets the size of the displays from the X host's screen.  Specifying the size of the framebuffers will therefore require some code modification, but it should be pretty minor.
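
For what it's worth, in plain client/server mode you can already request an arbitrary render size from the Python API; here is a minimal untested sketch (it assumes a render view is already active, and the numbers are just your wall size):

# Untested sketch (pvpython): request a specific image size in plain
# client/server mode.  In tiled-display (-tdx/-tdy) mode the size comes
# from the X screen instead, which is exactly the part needing changes.
from paraview.simple import GetActiveView, Render

view = GetActiveView()          # assumes a render view is already active
view.ViewSize = [7920, 3060]    # 4x3 wall of 1980x1020 panels
Render(view)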

I don't know if anyone has directly interfaced ParaView with the DisplayCluster software; I have not been following it that closely.  I do remember a presentation on SAGE at the 2008 Ultrascale Visualization Workshop that described using SAGE to stream pixels from ParaView on one cluster to another cluster driving a tiled display, very much like what you are doing (http://vis.cs.ucdavis.edu/Ultravis08/slides/Leigh-UltraViz.pdf).  If you have no other luck, you might consider contacting those authors.

As far as data replication is concerned, ParaView should handle that for you.  The standard mode of operation is to leave the data distributed and use image compositing for parallel rendering.  I believe that if the size of the data being rendered is below the remote rendering threshold, then the data will be replicated on all tiled display renderers just as it would be sent to the client, but unless you screwed up your settings that only happens when the data is small enough to be collected.
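
If you want to guarantee that collection never happens, you can pin the threshold from Python; a minimal sketch (RemoteRenderThreshold is a property on the render view, specified in megabytes):

# Minimal sketch: force remote/parallel rendering regardless of data
# size, so geometry is never gathered and replicated on the render side.
from paraview.simple import GetActiveView

view = GetActiveView()
view.RemoteRenderThreshold = 0   # megabytes; 0 = always render remotely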

As far as ParaView is concerned, handling 12 vs 6 vs 1 tiles is about the same because IceT takes care of all that internally.  As far as performance is concerned, I would expect the single huge display to be at least marginally better than the small displays because of better load balancing and fewer possible rendering passes.  However, I do worry about the memory overhead of compositing a single 7920x3060 image.  I expect IceT to create at least 3 framebuffers to simultaneously send, receive, and compress/composite images.  By my count that is over 1/2 GB per MPI process: 7920 x 3060 is about 24.2 Mpixels, which at 4 bytes of RGBA plus 4 bytes of depth is roughly 194 MB per buffer, or about 0.58 GB for three.

-Ken

From: "Biddiscombe, John A." <biddisco at cscs.ch>
Date: Monday, June 3, 2013 2:19 PM
To: "paraview-developers at paraview.org" <paraview-developers at paraview.org>
Subject: [EXTERNAL] [Paraview-developers] DisplayCluster + ParaView

Colleagues have installed a large tiled display and are running DisplayCluster from TACC on it.
I've never used a tiled display, with ParaView or any other software, so I'm not sure what to do.

The out-of-the-box solution is to use -tdx=4 -tdy=3 and set the DISPLAY env var to point to the correct GPU, but I don't have access to the nodes hosting the wall itself (I can't run 6 pvservers on the wall nodes or send X to them). I would instead like to use the viz cluster to run N pvservers (where N > number of tiles) and send the pixels using the DisplayCluster API (see https://github.com/TACC/DisplayCluster). It essentially allows you to send rectangular blocks of pixels into the final image, so I'd simply want 4x3 regions to send.
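
The slicing itself should be trivial; a hypothetical sketch (numpy only, with the captured frame stubbed out and the actual DisplayCluster send omitted because I haven't studied its API yet):

# Hypothetical sketch: cut one full-wall frame into 4x3 pixel blocks.
# The frame here is a stand-in; in practice it would be the pixels
# captured from ParaView.  The DisplayCluster send call is omitted.
import numpy as np

tdx, tdy = 4, 3
tw, th = 1980, 1020                                  # per-tile pixel size
frame = np.zeros((th * tdy, tw * tdx, 3), np.uint8)  # stand-in frame

for j in range(tdy):
    for i in range(tdx):
        block = np.ascontiguousarray(frame[j*th:(j+1)*th, i*tw:(i+1)*tw])
        # hand `block` and its offset (i*tw, j*th) to DisplayCluster here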

I'll therefore need to set up a custom view in ParaView and capture pixels to send them. Since the display could conceivably be 1980x1020 x 4 x 3 pixels, this will mean some very large framebuffers on all nodes:

1)      Has anyone already integrated DisplayCluster into ParaView, and is there a ParaView branch anywhere with such developments in it?

2)      I do not want to replicate data on each render node; that seems to be considered usual for tiled displays, but it is out of the question as far as I'm concerned. Is there a better strategy for handling the display? Are 12 small windows better than 1 very large one, for example? I can imagine setting up 12 views, each with a different frustum, and pasting them into the final large image, but then perhaps the overhead of managing 12 windows is worse than 1 huge one (a rough sketch of the frustum bookkeeping is below). (NB. It's actually 6 windows instead of 12, because 2 displays are driven off each GPU on the wall driver nodes and the X server considers them as one.)
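
For the many-frustums variant, I imagine the camera bookkeeping looking roughly like this (an untested sketch using vtkCamera's WindowCenter; it assumes the camera was first framed for the full wall, and the SaveScreenshot call just stands in for the DisplayCluster send):

# Untested sketch: render tdx x tdy sub-frustum tiles from one camera by
# shifting the projection window (vtkCamera::SetWindowCenter) and
# shrinking the view angle so each tile spans 1/tdy of the full height.
import math
from paraview.simple import GetActiveView, Render, SaveScreenshot

tdx, tdy = 4, 3
view = GetActiveView()
view.ViewSize = [1980, 1020]                # one tile's pixels
cam = view.GetActiveCamera()
half = math.tan(math.radians(cam.GetViewAngle()) / 2.0) / tdy
cam.SetViewAngle(2.0 * math.degrees(math.atan(half)))

for j in range(tdy):                        # rows, bottom to top
    for i in range(tdx):                    # columns, left to right
        cam.SetWindowCenter(-(tdx - 1) + 2 * i, -(tdy - 1) + 2 * j)
        Render(view)
        SaveScreenshot('tile_%d_%d.png' % (i, j), view)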

Any advice gratefully received. Thanks

JB


--
John Biddiscombe,                        email: biddisco at cscs.ch
http://www.cscs.ch/
CSCS, Swiss National Supercomputing Centre  | Tel:  +41 (91) 610.82.07
Via Trevano 131, 6900 Lugano, Switzerland   | Fax:  +41 (91) 610.82.82
