[vtkusers] MPI Volume Rendering
Kevin H. Hobbs
kevin.hobbs.1 at ohiou.edu
Fri May 5 07:28:10 EDT 2006
On Fri, 2006-05-05 at 08:43 +0200, John Biddiscombe wrote:
> you are loading the data on one process, then using distributed data
> filter to send it to others, then render.
>
No, at least that is not my intent. The distributed data filter is
commented out right now. I'm not sure if I need it or not. The data are
prepositioned on the local hard disks of each node.
> Much better is to load the data in pieces on each processor, then render
> and composite afterwards. (I may have misread things because each
> processor could be receiving a different file name as argument)
>
They all get the same file; it is just copied to each node.
> on each processor, load a piece, render it, then composite on the client
> or master node. If volume pieces are contiguous blocks then transparency
> will be handled by sorting the composition order...
>
Hmmm... it is a single-piece file. I could split it up into streamed
pieces, but I don't know whether that helps.
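For what it's worth, the usual way a demand-driven pipeline divides a
single dataset into streamed pieces is just an even partition of the cell
range by (piece, number-of-pieces). A rough sketch in plain Python (the
function name is mine, not VTK's):

```python
def piece_cell_range(num_cells, piece, num_pieces):
    """Return the half-open [start, end) cell range for one piece.

    Divides num_cells as evenly as possible among num_pieces pieces,
    the same kind of split a streaming reader would be asked for via
    UpdatePiece / UpdateNumberOfPieces.
    """
    base = num_cells // num_pieces        # cells every piece gets
    extra = num_cells % num_pieces        # first 'extra' pieces get one more
    start = piece * base + min(piece, extra)
    size = base + (1 if piece < extra else 0)
    return start, start + size
```

So with, say, 10 cells and 3 pieces you would get the ranges (0,4),
(4,7), (7,10) -- each node could then read only its own range instead of
the whole file.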
> anyway, I expect you know all this. sorry
>
> JB
>
At least for now the actual unstructured grid does not consume that much
RAM; it's something on the order of 100 MB. The real RAM usage _seems_
( at least according to gkrellm while ParaView is running ) to happen
during the actual volume rendering. I should try putting some sort of
monitor on the cluster processes to see what's up...
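Incidentally, the compositing-by-sorted-order that JB describes above is
just the standard back-to-front "over" operator applied per pixel. A
minimal sketch in plain Python (names are mine, not VTK's; layers are
assumed already depth-sorted back-to-front):

```python
def composite_back_to_front(layers):
    """Alpha-composite layers with the 'over' operator.

    layers: iterable of ((r, g, b), alpha) pairs, ordered back-to-front.
    Starts from a black, fully transparent background, so an opaque
    front layer completely hides everything behind it.
    """
    r = g = b = 0.0
    for (lr, lg, lb), a in layers:
        r = lr * a + r * (1.0 - a)
        g = lg * a + g * (1.0 - a)
        b = lb * a + b * (1.0 - a)
    return (r, g, b)
```

This is why contiguous volume pieces matter: as long as each process's
block can be depth-sorted as a unit, the master only needs to apply this
operator in sorted order to get transparency right.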