[Paraview] capability of ParaView, Catalyst in distributed computing environment ...

Andy Bauer andy.bauer at kitware.com
Wed May 18 10:52:20 EDT 2016


Hi,

I'm a bit confused. MPI_COMM_WORLD is the global communicator and, as far as
I'm aware, it can't be modified, which means the two codes can't each have
their own MPI_COMM_WORLD.
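
In a coupled run the two codes normally get disjoint sub-communicators split
off MPI_COMM_WORLD rather than two separate "worlds". A minimal sketch of
that, assuming the 4+4 rank layout from the question below:

    #include <mpi.h>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);

      int worldRank;
      MPI_Comm_rank(MPI_COMM_WORLD, &worldRank);

      // Assumed 4+4 layout from the question: ranks 0-3 run model1,
      // ranks 4-7 run model2. The "color" only has to differ per group.
      int color = (worldRank < 4) ? 0 : 1;
      MPI_Comm modelComm;
      MPI_Comm_split(MPI_COMM_WORLD, color, worldRank, &modelComm);

      // modelComm now acts as "model1's communicator" or "model2's
      // communicator"; MPI_COMM_WORLD itself is left untouched.

      MPI_Comm_free(&modelComm);
      MPI_Finalize();
      return 0;
    }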

Catalyst can be set to use a specific MPI communicator and that's been done
by at least one code (Code_Saturne). I think they have a multiphysics
simulation as well.
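
A rough sketch of what that looks like on the adaptor side, assuming the
vtkCPProcessor::Initialize() overload that takes a vtkMPICommunicatorOpaqueComm
(the script name passed in is just a placeholder):

    #include <mpi.h>
    #include <vtkCPProcessor.h>
    #include <vtkCPPythonScriptPipeline.h>
    #include <vtkMPI.h>  // vtkMPICommunicatorOpaqueComm

    // Initialize Catalyst on a sub-communicator instead of MPI_COMM_WORLD.
    vtkCPProcessor* InitializeCatalyst(MPI_Comm modelComm, const char* script)
    {
      vtkCPProcessor* processor = vtkCPProcessor::New();

      // Wrap the MPI handle so Catalyst runs on modelComm only.
      vtkMPICommunicatorOpaqueComm comm(&modelComm);
      processor->Initialize(comm);

      vtkCPPythonScriptPipeline* pipeline = vtkCPPythonScriptPipeline::New();
      pipeline->Initialize(script);  // e.g. "coproc.py" (placeholder name)
      processor->AddPipeline(pipeline);
      pipeline->Delete();

      return processor;
    }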

Cheers,
Andy

On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE) <
u.utku.turuncoglu at be.itu.edu.tr> wrote:

> Hi All,
>
> I just wonder about the capabilities of ParaView Catalyst in a distributed
> computing environment. I have a little experience with in-situ
> visualization, but it is hard for me to see the big picture at this point.
> So I decided to ask the user list to get some suggestions from the experts.
> Hypothetically, let's assume that we have two simulation codes that are
> coupled together (e.g. fluid-structure interaction), and both of them have
> their own MPI_COMM_WORLD and run on different processors (model1 runs on
> MPI ranks 0,1,2,3 and model2 runs on 4,5,6,7). What is the correct design
> for integrated in-situ visualization/analysis (both models contribute to
> the same visualization pipeline) in this case? Do you know of any
> implementation that is similar to this design? At least, is it possible?
>
> In this case, the adaptor code will need access to two different
> MPI_COMM_WORLDs, and it could run on all processors (0 to 7) or on its own
> MPI_COMM_WORLD (i.e. MPI ranks 8,9,10,11). Also, both simulation codes have
> their own grid and field definitions (which might be handled by defining
> different input ports). Does it create a problem on the ParaView/Catalyst
> side if a multiblock dataset is used to define the grids of the components
> in the adaptor? I am asking because some MPI processes (belonging to the
> adaptor code) will not have data for a specific model component due to the
> domain decomposition of the individual models. For example, MPI ranks
> 4,5,6,7 will not have data for model1 (which runs on MPI ranks 0,1,2,3),
> and ranks 0,1,2,3 will not have data for model2 (which runs on MPI ranks
> 4,5,6,7). To that end, do I need to collect all the data from the
> components? If so, how can I handle the 2D decomposition problem, given
> that the adaptor code has no common grid structure that fits all the model
> components?
>
> Regards,
>
> Ufuk Turuncoglu
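
Regarding the multiblock question quoted above: as far as I know, Catalyst
and ParaView handle a multiblock dataset whose leaves are NULL on ranks that
do not own that piece, as long as the block structure matches across ranks,
so no gathering is needed. A minimal, hypothetical adaptor-side sketch (the
parameter names and the "input" port name are placeholders for however the
coupled code exposes its local grids):

    #include <vtkCPDataDescription.h>
    #include <vtkCPInputDataDescription.h>
    #include <vtkMultiBlockDataSet.h>
    #include <vtkUnstructuredGrid.h>

    // Build one multiblock input for Catalyst. Every rank creates the same
    // two-block structure; a rank passes NULL for the model it does not run.
    void SetCatalystGrid(vtkCPDataDescription* dataDescription,
                         vtkUnstructuredGrid* model1LocalGrid,  // NULL on ranks 4-7
                         vtkUnstructuredGrid* model2LocalGrid)  // NULL on ranks 0-3
    {
      vtkMultiBlockDataSet* grid = vtkMultiBlockDataSet::New();
      grid->SetNumberOfBlocks(2);
      grid->SetBlock(0, model1LocalGrid);  // empty block on model2's ranks
      grid->SetBlock(1, model2LocalGrid);  // empty block on model1's ranks

      // "input" is a placeholder port name for this sketch.
      dataDescription->GetInputDescriptionByName("input")->SetGrid(grid);
      grid->Delete();
    }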