[Paraview] Multi-block distribution in parallel
John Biddiscombe
biddisco at cscs.ch
Wed Jul 16 14:10:24 EDT 2008
> I think the vtm format does what you want. It worked well with 4,096
> vti files.
I am using vtm files referencing vtu files, but the data does not split
well across processes with vtu. I'm not keen on delving into the XML
reader internals to find out how the block splitting is done; perhaps the
vti files are handled better because they are structured, though it
shouldn't work with vti either, since the partitioning at the
block/dataset level is the same. Perhaps you did this before a lot of
the recent collection/XML reader changes went in... I'm still looking
for a solution.
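One way to sketch the distribution being asked for (plain Python, not the ParaView reader API; the round-robin assignment policy is an assumption, chosen only for illustration):

```python
# Sketch: assign whole blocks of a multi-block dataset to processes,
# so each rank reads only its own blocks instead of every rank reading
# (and splitting) every block.

def blocks_for_rank(num_blocks, num_procs, rank):
    """Round-robin assignment: block i is read only by rank i % num_procs."""
    return [b for b in range(num_blocks) if b % num_procs == rank]

# Example: the 38 blocks mentioned above, distributed over 4 processes.
assignment = {r: blocks_for_rank(38, 4, r) for r in range(4)}
```

With such a mapping, each rank would open only the vtu files for its assigned block indices, avoiding the IO overhead of all processes reading all blocks.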
JB
>
> Kent
> Pittsburgh Supercomputing Center
>
>
> John Biddiscombe wrote:
>> When loading a multiblock dataset with 38 datasets from a series of
>> XML vtu files, ParaView splits each dataset among all processors.
>> Since each block already represents a 'piece' of the data, is there
>> a way I can tell ParaView to load one or more blocks per process,
>> rather than splitting all of them? The splitting itself is not such a
>> big problem, but the files are huge, and the IO overhead of having
>> every block read by every process makes loading much too slow to use.
>>
>> If not, is there an easy way of converting a vtmb file that looks
>> like this (below) into a parallel form, where the pieces are the
>> blocks?
>>
>> JB
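The quoted example file was not included in this excerpt. For reference, a minimal multi-block .vtm file referencing .vtu pieces generally looks something like this (a sketch only; the filenames are hypothetical):

```xml
<?xml version="1.0"?>
<VTKFile type="vtkMultiBlockDataSet" version="1.0" byte_order="LittleEndian">
  <vtkMultiBlockDataSet>
    <!-- Each DataSet entry points at one unstructured-grid piece. -->
    <DataSet index="0" file="piece_0.vtu"/>
    <DataSet index="1" file="piece_1.vtu"/>
    <!-- ... one entry per block ... -->
  </vtkMultiBlockDataSet>
</VTKFile>
```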
--
John Biddiscombe, email:biddisco @ cscs.ch
http://www.cscs.ch/about/BJohn.php
CSCS, Swiss National Supercomputing Centre | Tel: +41 (91) 610.82.07
Via Cantonale, 6928 Manno, Switzerland | Fax: +41 (91) 610.82.82