[Paraview] capability of ParaView, Catalyst in distributed computing environment ...
u.utku.turuncoglu at be.itu.edu.tr
Sun May 22 07:59:41 EDT 2016
Thanks for the information. Currently, I am working on a two-component case,
and the initial results show that the grid and data information belonging to
each model component must be accessible by all the MPI processes (defined in
the global MPI_COMM_WORLD) on the adaptor side. This makes the implementation
very complex when the 2d decomposition configurations of both model components
(which each run on a specific subset of processors) are considered. In this
case, it seems that the easiest way is to interpolate/redistribute the data of
both components onto a common grid, or to create a new 2d decomposition in the
adaptor. Another possibility might be to implement MPI sections specific to
each model component (basically having two distinct component communicators
inside the global MPI_COMM_WORLD) to access the grids and fields on the
adaptor side, but in this case I am not sure whether ParaView could handle
this kind of information or not. Anyway, it seems to be a challenging problem,
and it would probably be good to have this feature. I will continue to try
different implementations to test different ideas and keep you posted. In the
meantime, if you have any other ideas, let me know.
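
For the second option, here is a minimal sketch of carving per-component
communicators out of the global MPI_COMM_WORLD with MPI_Comm_split; the
0-3 / 4-7 rank split follows the example layout discussed earlier in the
thread and is illustrative only:

  // Sketch: split the global communicator into per-component communicators.
  // Rank layout (0-3 -> model1, 4-7 -> model2) is illustrative only.
  #include <mpi.h>

  int main(int argc, char** argv)
  {
    MPI_Init(&argc, &argv);

    int worldRank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &worldRank);

    // color 0 = model1 ranks, color 1 = model2 ranks
    const int color = (worldRank < 4) ? 0 : 1;

    MPI_Comm modelComm;   // component-specific communicator
    MPI_Comm_split(MPI_COMM_WORLD, color, worldRank, &modelComm);

    // Each component runs and decomposes its grid on modelComm, while the
    // adaptor can still use the global MPI_COMM_WORLD to reach both
    // components' data.

    MPI_Comm_free(&modelComm);
    MPI_Finalize();
    return 0;
  }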
Regards,
--ufuk
> It may be possible to do this with Catalyst. I would guess that nearly all
> of the complex work would need to be done in the adaptor to integrate this
> properly though.
>
> On Wed, May 18, 2016 at 11:17 AM, <u.utku.turuncoglu at be.itu.edu.tr> wrote:
>
>> Yes, you are right. In this case, there will be two separate
>> MPI_COMM_WORLDs, plus one that covers all the resources (let's call it the
>> global MPI_COMM_WORLD). Actually, this kind of setup is very common for
>> multi-physics applications such as fluid-structure interaction. So, is it
>> possible to tie this kind of environment to Catalyst? I am not an expert
>> on Catalyst, but it seems that there might be a problem in the rendering
>> stage even if the underlying grids and fields are defined without any
>> problem.
>>
>> Regards,
>>
>> --ufuk
>>
>> > I'm not sure if this is exactly what the original user is referring to,
>> > but it is possible to have two separate codes communicate using MPI
>> > through the dynamic processes in MPI-2. Essentially, one program starts
>> > up on N processors, begins running, and gets an MPI_COMM_WORLD. It then
>> > spawns another executable on M different processors, and that new
>> > executable will call MPI_INIT and also get its own MPI_COMM_WORLD. So you
>> > have two disjoint MPI_COMM_WORLDs that get linked together through a
>> > newly created intercommunicator.
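
[A minimal sketch of the dynamic-process pattern described above, assuming a
hypothetical child executable named ./model2; on its side, the child would
call MPI_Comm_get_parent() to obtain the same intercommunicator:]

  // parent.cpp -- spawn a second executable and obtain an intercommunicator.
  // The child executable name "./model2" and maxprocs=4 are illustrative;
  // error handling is omitted.
  #include <mpi.h>

  int main(int argc, char** argv)
  {
    MPI_Init(&argc, &argv);   // parent gets its own MPI_COMM_WORLD

    MPI_Comm intercomm;       // links the parent and child worlds
    MPI_Comm_spawn("./model2", MPI_ARGV_NULL, /*maxprocs=*/4,
                   MPI_INFO_NULL, /*root=*/0, MPI_COMM_WORLD,
                   &intercomm, MPI_ERRCODES_IGNORE);

    // The child calls MPI_Init and MPI_Comm_get_parent() to see the same
    // intercommunicator from its side; data exchange then goes through it.

    MPI_Finalize();
    return 0;
  }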
>> >
>> >
>> > I've used this to couple a structural mechanics code to our fluid
>> > dynamics solver, for example. It sounds like that is similar to what is
>> > being done here.
>> >
>> >
>> > How that would interact with coprocessing is beyond my knowledge though.
>> > It does sound like an interesting problem and one whose details I would
>> > be very curious to find out.
>> >
>> >
>> > Tim
>> >
>> >
>> > ________________________________
>> > From: ParaView <paraview-bounces at paraview.org> on behalf of Andy Bauer
>> > <andy.bauer at kitware.com>
>> > Sent: Wednesday, May 18, 2016 10:52 AM
>> > To: Ufuk Utku Turuncoglu (BE)
>> > Cc: paraview at paraview.org
>> > Subject: Re: [Paraview] capability of ParaView, Catalyst in distributed
>> > computing environment ...
>> >
>> > Hi,
>> >
>> > I'm a bit confused. MPI_COMM_WORLD is the global communicator and, as
>> > far as I'm aware, it can't be modified, which means there can't be two
>> > different MPI_COMM_WORLDs.
>> >
>> > Catalyst can be set to use a specific MPI communicator, and that's been
>> > done by at least one code (Code_Saturne). I think they have a
>> > multiphysics simulation as well.
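
[A minimal sketch of handing a component-specific communicator to Catalyst,
assuming the vtkCPProcessor::Initialize() overload that accepts a
vtkMPICommunicatorOpaqueComm, which is the mechanism referred to above for
the Code_Saturne integration; the helper name is hypothetical:]

  // Sketch: initialize Catalyst on a sub-communicator rather than on
  // MPI_COMM_WORLD, so only the ranks of one component do co-processing.
  #include <mpi.h>
  #include <vtkCPProcessor.h>
  #include <vtkMPI.h>   // vtkMPICommunicatorOpaqueComm

  vtkCPProcessor* CreateCoProcessor(MPI_Comm* modelComm)
  {
    vtkCPProcessor* processor = vtkCPProcessor::New();

    // Wrap the component-specific communicator and hand it to Catalyst.
    vtkMPICommunicatorOpaqueComm comm(modelComm);
    processor->Initialize(comm);

    return processor;
  }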
>> >
>> > Cheers,
>> > Andy
>> >
>> > On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE)
>> > <u.utku.turuncoglu at be.itu.edu.tr> wrote:
>> > Hi All,
>> >
>> > I am wondering about the capability of ParaView/Catalyst in a
>> > distributed computing environment. I have a little experience with
>> > in-situ visualization, but it is hard for me to see the big picture at
>> > this point, so I decided to ask the user list for suggestions from the
>> > experts. Hypothetically, let's assume that we have two simulation codes
>> > that are coupled together (e.g. fluid-structure interaction), and both
>> > of them have their own MPI_COMM_WORLD and run on different processors
>> > (model1 runs on MPI ranks 0,1,2,3 and model2 runs on ranks 4,5,6,7).
>> > What is the correct design to create an integrated in-situ visualization
>> > analysis (both models contributing to the same visualization pipeline)
>> > in this case? Do you know of any implementation that is similar to this
>> > design? At least, is it possible?
>> >
>> > In this case, the adaptor code will need access to two different
>> > MPI_COMM_WORLDs, and it could run on all processors (from 0 to 7) or on
>> > its own MPI_COMM_WORLD (i.e. MPI ranks 8,9,10,11). Also, both simulation
>> > codes have their own grid and field definitions (which might be handled
>> > by defining different input ports). Does it create a problem on the
>> > ParaView/Catalyst side if a multiblock dataset is used to define the
>> > grids of the components in the adaptor? I am asking because some MPI
>> > processes (belonging to the adaptor code) will not have data for a
>> > specific model component due to the domain decomposition of the
>> > individual models. For example, MPI ranks 4,5,6,7 will not have data for
>> > model1 (which runs on MPI ranks 0,1,2,3), and ranks 0,1,2,3 will not
>> > have data for model2 (which runs on MPI ranks 4,5,6,7). To that end, do
>> > I need to collect all the data from the components? If so, how can I
>> > handle the 2d decomposition problem? The adaptor code has no common grid
>> > structure that fits all the model components.
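
[A minimal sketch of the multiblock layout described in the question, under
the assumption that every rank publishes the same two-block structure and
leaves a block empty when it holds no data for that component; whether that
is sufficient for the rendering stage is exactly the open question here, and
the grid types and helper name are illustrative only:]

  // Sketch: one multiblock dataset with one block per model component.
  // Each rank uses the same two-block layout; a rank that does not run a
  // given component passes nullptr for that component's grid, so the
  // corresponding block stays empty on that rank.
  #include <vtkMultiBlockDataSet.h>
  #include <vtkSmartPointer.h>
  #include <vtkStructuredGrid.h>

  vtkSmartPointer<vtkMultiBlockDataSet> BuildAdaptorGrid(
      vtkStructuredGrid* model1Grid,   // nullptr on ranks 4,5,6,7
      vtkStructuredGrid* model2Grid)   // nullptr on ranks 0,1,2,3
  {
    auto grid = vtkSmartPointer<vtkMultiBlockDataSet>::New();
    grid->SetNumberOfBlocks(2);        // block 0: model1, block 1: model2
    grid->SetBlock(0, model1Grid);
    grid->SetBlock(1, model2Grid);
    return grid;
  }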
>> >
>> > Regards,
>> >
>> > Ufuk Turuncoglu
>> >
>> >
>>
>>
>