[Paraview] Ghost data in parallel formats

Renato Elias rnelias at gmail.com
Thu Mar 3 12:15:45 EST 2011


Hi Berk,

I thought D3 wouldn't repartition the data, but that was my mistake. I did some
tests here and D3 worked properly. However, I was expecting that it would be
possible to remove the internal surfaces produced by D3 with the "Clean to
Grid" filter. I also tried playing with the "Boundary Mode" options, with no
success. The internal faces are still there.
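For intuition, here is a minimal pure-Python sketch (not ParaView/VTK code; the 1D cells and partition layout are invented) of why per-partition surface extraction exposes the partition interfaces, and why a layer of ghost cells lets the extraction skip them:

```python
def surface_faces(local_cells, ghost_cells=()):
    """1D toy mesh: cell i spans [i, i+1]; its two faces are points i and i+1.
    A face is emitted as "surface" if exactly one cell in this partition
    touches it and that cell is not a ghost (ghost geometry is dropped
    from the output, analogous to what surface extraction does)."""
    ghosts = set(ghost_cells)
    adjacent = {}
    for c in set(local_cells) | ghosts:
        for f in (c, c + 1):
            adjacent.setdefault(f, []).append(c)
    return sorted(f for f, cells in adjacent.items()
                  if len(cells) == 1 and cells[0] not in ghosts)

# Two partitions of a 10-cell bar: cells 0-4 on rank 0, cells 5-9 on rank 1.
# Without ghost cells, each rank reports the partition boundary (face 5)
# as exterior, i.e. the internal surface:
#   surface_faces(range(0, 5))  -> [0, 5]
#   surface_faces(range(5, 10)) -> [5, 10]
# With one layer of ghost cells, face 5 is seen as interior on both ranks:
#   surface_faces(range(0, 5), ghost_cells=[5])  -> [0]
#   surface_faces(range(5, 10), ghost_cells=[4]) -> [10]
```

With ghosts, the union of the per-rank results ([0] and [10]) is exactly the global boundary; without them, the shared face 5 shows up on both ranks, which is the internal surface Clean to Grid cannot remove.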

Some further comments:

I used the Xdmf reader on a temporal collection (12 time steps) of spatial
collections (4 partitions) for a small unstructured mesh. The reader loads
everything as a single multiblock. This will surely blow out the memory for a
larger number of time steps and/or spatial partitions (I'm copying this
message to the Xdmf list).

ParaView 3.10.0-RC1 does not seem robust (yet) :-(

It crashes all of a sudden and stops responding during random actions (I'll
try to list some scenarios and make my dataset available to reproduce the
errors).

The client machine is a Windows 7 x64 box connected to an offscreen MPI
session on an Altix-ICE server (Linux x86_64).

[]'s

Renato.

On Wed, Mar 2, 2011 at 12:58 PM, Berk Geveci <berk.geveci at kitware.com> wrote:

> I am not sure that I follow. The output of D3 should be a dataset
> re-partitioned to be load balanced whether the input is distributed or
> not. Are you saying that the output of D3 is different based on
> whether the input is distributed?
>
> As expected, D3 will probably produce a different partitioning than
> the input. But that shouldn't be a problem, right?
>
> On Wed, Mar 2, 2011 at 7:54 AM, Renato Elias <rnelias at gmail.com> wrote:
> > Hi Berk,
> > I already did such a test. It really works, but the dataset must be serial
> > and loaded in a parallel session. In this case, D3 will take care of the
> > data distribution, load balancing and ghost information. However, if the
> > dataset read is already partitioned, D3 only creates a new partition and
> > leaves the original distribution (in my case, performed by Metis)
> > untouched. D3 is not able to (re)partition unstructured grids (at least in
> > my tests... or I'm doing something wrong).
> > []'s
> > Renato.
> >
> > On Tue, Mar 1, 2011 at 1:09 PM, Berk Geveci <berk.geveci at kitware.com> wrote:
> >>
> >> Great. Ping me in 2-3 months - we should have started making changes
> >> to the ghost level stuff by then. Until then, you should be able to
> >> use D3 to redistribute data and generate (cell) ghost levels as
> >> needed. So the following should work
> >>
> >> reader -> D3 -> extract surface
> >>
> >> -berk
> >>
> >> On Tue, Mar 1, 2011 at 10:57 AM, Renato Elias <rnelias at gmail.com> wrote:
> >> >> Berk: Do you have the ability to mark a node as "owned" by one
> >> >> partition
> >> >> and as "ghost" on
> >> >> other partitions?
> >> > Yes! We classify processes as masters and slaves according to their
> >> > rank numbers. After this we can assign the shared node to a master
> >> > (which will take care of shared computations) and tell the slave
> >> > process that this node is being shared with a master (and the slave
> >> > process will consider it as a "ghost" for computations).
> >> > The ideas we used in our parallel solver were taken from the following
> >> > article:
> >> > Karanam, A. K., Jansen, K. E. and Whiting, C. H., "Geometry Based
> >> > Pre-processor for Parallel Fluid Dynamic Simulations Using a
> >> > Hierarchical Basis", Engineering with Computers (24):17-26, 2008.
> >> > The article was made available by the author at
> >> > http://www.scorec.rpi.edu/REPORTS/2007-3.pdf (Figures 7 to 9 explain
> >> > the communication method between processes).
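The master/slave (owner/ghost) classification described above can be sketched in a few lines of plain Python (node ids and rank numbering are invented; this is only an illustration, not the solver's actual code):

```python
def classify_nodes(partitions):
    """partitions: {rank: iterable of global node ids held by that rank}.
    Returns {rank: {node: "owned" | "ghost"}}, where each shared node is
    owned by the lowest rank that holds it (the "master") and marked as
    a ghost on every other rank (the "slaves")."""
    owner = {}
    for rank in sorted(partitions):
        for node in partitions[rank]:
            owner.setdefault(node, rank)   # first (lowest) rank wins
    return {rank: {node: "owned" if owner[node] == rank else "ghost"
                   for node in partitions[rank]}
            for rank in partitions}

# Node 2 sits on the interface between ranks 0 and 1, so rank 0 owns it
# and rank 1 treats it as a ghost:
#   classify_nodes({0: [0, 1, 2], 1: [2, 3, 4]})
```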
> >> > Regards
> >> > Renato.
> >> >
> >> > On Tue, Mar 1, 2011 at 12:24 PM, Berk Geveci <berk.geveci at kitware.com> wrote:
> >> >>
> >> >> Hi Renato,
> >> >>
> >> >> > I think I'm missing something... you said cells only?!
> >> >> > If I understood this subject correctly, a cell should be considered
> >> >> > ghost if it's held by more than one partition/process, isn't it?! In
> >> >> > this case, there'll be an overlapped layer of elements. The problem
> >> >> > is that my MPI solver does not make use of this overlapped layer of
> >> >> > cells/elements.
> >> >>
> >> >> Yep. You understood correctly. Ghost cells are very common for finite
> >> >> difference calculations but not as common for finite elements.
> >> >>
> >> >> > It only has nodes/points that are shared by processes. This explains
> >> >> > why I asked about a ghost node ("shared node" would be more
> >> >> > appropriate to describe such a node).
> >> >> > Can I consider a cell as ghost if it touches the parallel interface
> >> >> > (without
> >> >> > overlapping)? Would it work?
> >> >>
> >> >> Nope. Then you'd start seeing gaps. The right thing to do is for
> >> >> ParaView to support ghost points (nodes) better. However, this is
> >> >> non-trivial in some cases. For removing internal interfaces, it is
> >> >> sufficient to mark points as ghosts. However, for accurately
> >> >> performing statistics, you need to make sure that you count all
> >> >> points only once, which requires assigning ghost nodes to processes.
> >> >> So a replicated node would be marked as ghost (a better word is
> >> >> shared) and also owned by a particular process. We are going to
> >> >> improve VTK's ghost level support. This is something we'll support.
> >> >> However, it will be up to the simulation to produce the right output.
> >> >> Do you have the ability to mark a node as "owned" by one partition
> >> >> and as "ghost" on other partitions?
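As a toy illustration of that counting requirement (plain Python, with invented node values): once every replicated node has a single owner, a global statistic can skip the ghost copies so each physical point contributes exactly once:

```python
def global_point_count_and_sum(values, status):
    """values: {rank: {node: value}};
    status: {rank: {node: "owned" | "ghost"}}.
    Accumulates each physical node exactly once by skipping ghost copies."""
    count, total = 0, 0.0
    for rank, vals in values.items():
        for node, v in vals.items():
            if status[rank][node] == "owned":
                count += 1
                total += v
    return count, total

# Node 2 is replicated on both ranks but is a ghost on rank 1, so it is
# counted and summed only once:
#   values = {0: {0: 1.0, 1: 2.0, 2: 3.0}, 1: {2: 3.0, 3: 4.0, 4: 5.0}}
#   status = {0: {0: "owned", 1: "owned", 2: "owned"},
#             1: {2: "ghost", 3: "owned", 4: "owned"}}
#   global_point_count_and_sum(values, status)  -> (5, 15.0)
```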
> >> >>
> >> >> Best,
> >> >> -berk
> >> >
> >> >
> >> >
> >
> >
> >
>



-- 
Renato N. Elias
=============================================
Professor at Technology and Multilanguage Department (DTL)
Federal Rural University of Rio de Janeiro (UFRRJ)
Nova Iguaçu, RJ - Brazil
=============================================
Researcher at High Performance Computing Center (NACAD)
Federal University of Rio de Janeiro (UFRJ)
Rio de Janeiro, Brazil