[Paraview] d3 poor domain decomposition
Berk Geveci
berk.geveci at kitware.com
Sun May 12 16:43:00 EDT 2013
Hi Burlen,
This sounds like a good use case for handling multi-block and multi-piece
differently. The intent of multi-piece was to be a vehicle for parallelism
(or other reasons for partitioning data), whereas multi-block was to
represent intentional hierarchies. We haven't taken this to its logical
conclusion, so we are a bit in limbo when it comes to differentiating the
two. I would encourage folks to think about this and move this
functionality forward where it makes sense. It is definitely on my to-do
list to make progress on this within the next year or two.
In the case of D3, it should be doable to have D3 treat a multi-piece
dataset as a single entity and repartition the whole thing freely across MPI
ranks.
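As a stop-gap, something along these lines in pvpython (untested; 'Merge
Blocks' and 'D3' as the paraview.simple proxy names, and 'surface.vtm' just a
placeholder file) gets close to that today by collapsing the composite
structure first, at the cost of losing the intentional hierarchy, and then
letting D3 repartition the merged grid across ranks:

  from paraview.simple import *

  reader = OpenDataFile('surface.vtm')     # placeholder file name
  merged = MergeBlocks(Input=reader)       # flatten all blocks/pieces into one grid
  redistributed = D3(Input=merged)         # D3 now repartitions the whole grid freely
  redistributed.UpdatePipeline()
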
Best,
-berk
On Fri, May 10, 2013 at 6:13 PM, Burlen Loring <bloring at lbl.gov> wrote:
> Hi John,
>
> In hindsight I think d3 is doing something reasonable; you have quite a
> few use cases to support. When blocks are only a vehicle for parallelism,
> the block structure doesn't matter and could be ignored or done away with.
> When blocks describe physical structures, assemblies, parts of a machine,
> and so on, you have to retain the full hierarchy. Perhaps a data
> partitioner could make a better decomposition if there were a way to let
> the user choose which level of structure he wants to retain, and it could
> then do the best it can within that constraint...
>
> In my case, ParaView had saved unexpected sub-block partitions, 1 dataset
> per rank within each block, in the vtm file, based on how many ranks were
> running at the time I extracted the surface (8 ranks x 8 blocks = 64
> sub-block partitions in the vtm file). But when that file is loaded in PV
> it shows no evidence of this; both the composite index and the process id
> correspond to the original 8 blocks. However, d3 partitioned each of those
> sub-block partitions: in my case, 4 ranks x 64 sub-blocks = 256 partitions
> in all! Definitely not what I expected, and I guess it's a worst-case
> scenario for d3. I don't care about the sub-block partitioning structure
> that PV used when I saved the data, but there's no way the writer, reader,
> or d3 could know that.
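>
> For what it's worth, a quick vtk-python check along these lines (untested;
> 'surface.vtm' stands in for the actual file) should show how many leaf
> datasets the saved vtm really holds, regardless of what the UI displays:
>
>   import vtk
>
>   reader = vtk.vtkXMLMultiBlockDataReader()
>   reader.SetFileName('surface.vtm')      # placeholder for the real file
>   reader.Update()
>
>   # the default composite iterator visits only non-empty leaf datasets
>   it = reader.GetOutput().NewIterator()
>   it.InitTraversal()
>   count = 0
>   while not it.IsDoneWithTraversal():
>       count += 1
>       it.GoToNextItem()
>   print('leaf datasets: %d' % count)     # 64 expected (8 ranks x 8 blocks)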
>
> Burlen
>
>
> On 05/10/2013 02:06 PM, Biddiscombe, John A. wrote:
>
> Burlen, Ken, List
>
> > The D3 filter will partition each block independently. This means that
> > each process will have a small region in each partition, which will be
> > spread throughout the dataset.
>
> This is my own experience too.
>
> Question: What would you like to see in the output?
>
> 1) Existing behaviour: each block is partitioned separately, and the
> previous multiblock structure is preserved.
>
> 2) All blocks partitioned as a single block, with no multiblock structure
> in the output.
>
> 3) All blocks partitioned as a single block, with the multiblock structure
> from the input regenerated based on a block Id assigned to each cell prior
> to partitioning.
>
> The reason I ask is because I'm working on a new partitioning class and I
> have not yet handled multi-block datasets, and I wonder what ought to be
> done in this case. 3) seems ideal (a rough sketch of the block-id tagging
> idea is below), but it is a bit harder and might use some intermediate
> memory that would be undesirable.
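>
> Something along these lines in plain vtk-python (untested, and the
> "BlockIdScalars" array name is from memory) is roughly what I have in mind
> for 3): tag every cell with the flat index of the block it came from,
> append the leaves into one grid, partition that single grid, and afterwards
> threshold on the tag to regenerate the block structure:
>
>   import vtk
>
>   reader = vtk.vtkXMLMultiBlockDataReader()
>   reader.SetFileName('input.vtm')        # placeholder input
>   reader.Update()
>
>   # 1) tag each cell with the flat index of its originating block
>   tag = vtk.vtkBlockIdScalars()
>   tag.SetInputData(reader.GetOutput())
>   tag.Update()
>
>   # 2) append all leaf datasets into a single unstructured grid; the
>   #    block-id cell array travels with the cells
>   append = vtk.vtkAppendFilter()
>   it = tag.GetOutput().NewIterator()
>   it.InitTraversal()
>   while not it.IsDoneWithTraversal():
>       append.AddInputData(it.GetCurrentDataObject())
>       it.GoToNextItem()
>   append.Update()
>
>   # 3) partition append.GetOutput() as one grid; afterwards a vtkThreshold
>   #    over the block-id array can rebuild the per-block grouping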
>
> JB