Hi,

I'm a bit confused. MPI_COMM_WORLD is the global communicator and, as far as I'm aware, it can't be modified, so a single MPI job can't have two different MPI_COMM_WORLDs.

Catalyst can be set to use a specific MPI communicator, and that's been done by at least one code (Code_Saturne). I think they have a multiphysics simulation as well.

Cheers,
Andy
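In case it helps, here is a rough sketch of how an adaptor can hand Catalyst a communicator other than MPI_COMM_WORLD (this assumes a ParaView/Catalyst build with MPI enabled and the Initialize overload that accepts a communicator; the function name is just illustrative):

#include <mpi.h>
#include <vtkCPProcessor.h>
#include <vtkMPI.h>  // vtkMPICommunicatorOpaqueComm

// Initialize Catalyst on a caller-supplied communicator instead of
// MPI_COMM_WORLD. The caller keeps ownership of 'comm' and should
// Delete() the returned processor when co-processing is finished.
vtkCPProcessor* InitializeCatalystOn(MPI_Comm comm)
{
  vtkCPProcessor* processor = vtkCPProcessor::New();
  vtkMPICommunicatorOpaqueComm vtkComm(&comm); // wrap the raw MPI handle
  processor->Initialize(vtkComm);              // Catalyst now runs on 'comm'
  return processor;
}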
On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE) <u.utku.turuncoglu@be.itu.edu.tr> wrote:

> Hi All,
>
> I'm wondering about the capabilities of ParaView/Catalyst in a distributed
> computing environment. I have a little experience with in-situ visualization,
> but it is hard for me to see the big picture at this point, so I decided to
> ask the list for suggestions from the experts. Hypothetically, let's assume we
> have two simulation codes that are coupled together (i.e. fluid-structure
> interaction); both have their own MPI_COMM_WORLD and run on different
> processors (model1 runs on MPI ranks 0, 1, 2, 3 and model2 on ranks 4, 5, 6, 7).
> What is the correct design for integrated in-situ visualization and analysis
> (both models contributing to the same visualization pipeline) in this case?
> Do you know of any implementation similar to this design? At least, is it
> possible?
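For concreteness, a common way to get the layout described above is a single MPI job in which each model works on a sub-communicator split off from the shared MPI_COMM_WORLD, while a joint Catalyst pipeline is initialized on a communicator spanning all participating ranks. A minimal sketch of that split, using the 0-3 / 4-7 ranks from the example (names are illustrative):

#include <mpi.h>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  int worldRank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &worldRank);

  // Ranks 0-3 form model1's communicator, ranks 4-7 form model2's.
  int color = (worldRank < 4) ? 0 : 1;
  MPI_Comm modelComm = MPI_COMM_NULL;
  MPI_Comm_split(MPI_COMM_WORLD, color, worldRank, &modelComm);

  // Each model advances on modelComm; the adaptor can initialize
  // Catalyst on MPI_COMM_WORLD so both models feed the same pipeline.

  MPI_Comm_free(&modelComm);
  MPI_Finalize();
  return 0;
}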
>
> In this case, the adaptor code will need access to two different
> MPI_COMM_WORLDs, and it could run on all processors (0 to 7) or on its own
> MPI_COMM_WORLD (i.e. MPI ranks 8, 9, 10, 11). Also, both simulation codes
> have their own grid and field definitions (which might be handled by defining
> different input ports). Does it create a problem on the ParaView/Catalyst side
> if a multiblock dataset is used to define the grids of the components in the
> adaptor? I am asking because some MPI processes (belonging to the adaptor
> code) will not have data for a specific model component due to the domain
> decomposition of the individual models. For example, MPI ranks 4, 5, 6, 7 will
> not have data for model1 (which runs on MPI ranks 0, 1, 2, 3), and ranks
> 0, 1, 2, 3 will not have data for model2 (which runs on MPI ranks 4, 5, 6, 7).
> To that end, do I need to collect all the data from the components? If so,
> how can I handle the 2D decomposition problem, since the adaptor code has no
> common grid structure that fits all the model components?
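On the multiblock question: as far as I know, a rank can simply leave a block NULL (or set an empty grid) when it owns no data for that component, so no gather should be needed. A rough sketch of the per-rank adaptor dataset, with block 0 for model1 and block 1 for model2 (function and argument names are just illustrative):

#include <vtkMultiBlockDataSet.h>
#include <vtkSmartPointer.h>
#include <vtkUnstructuredGrid.h>

// Build the per-rank input for a joint pipeline: block 0 holds model1's
// local grid, block 1 holds model2's. A rank that does not run a given
// model leaves that block NULL, and downstream filters treat it as an
// empty piece.
vtkSmartPointer<vtkMultiBlockDataSet> BuildAdaptorGrid(
  vtkUnstructuredGrid* model1LocalGrid,  // NULL on ranks 4-7
  vtkUnstructuredGrid* model2LocalGrid)  // NULL on ranks 0-3
{
  auto grids = vtkSmartPointer<vtkMultiBlockDataSet>::New();
  grids->SetNumberOfBlocks(2);
  grids->SetBlock(0, model1LocalGrid); // may be NULL on this rank
  grids->SetBlock(1, model2LocalGrid); // may be NULL on this rank
  return grids;
}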
>
> Regards,
>
> Ufuk Turuncoglu