[Paraview] Using 64 or more cores on a single node when running client/server

Andy Bauer andy.bauer at kitware.com
Wed Sep 13 12:51:54 EDT 2017


If you're running on the DSRCs, I'm guessing that the limit is coming from
the node selection option (i.e. "#PBS -l select=8:ncpus=36:mpiprocs=8").
Here, mpiprocs is the number of MPI processes per node, while ncpus is the
number of cores per node (in this case, only nodes that have 36 cores will
be used). I've never tried setting mpiprocs higher than ncpus. A simple MPI
hello world should show whether or not this is where the limit is coming
from.
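
Something along these lines would tell you how many MPI processes actually
land on each node; instead of compiling a real hello world it just launches
hostname under the MPI launcher, which gives the same information. The
queue settings and the launcher name (mpiexec vs. mpirun vs. aprun) are
guesses on my part, so adjust them for your site:

  #!/bin/bash
  #PBS -l select=8:ncpus=36:mpiprocs=8
  #PBS -l walltime=00:05:00
  #PBS -j oe

  cd $PBS_O_WORKDIR
  # Prints one line per node with the number of processes that landed on it.
  mpiexec hostname | sort | uniq -c

If the per-node counts match the mpiprocs value from the select line, the
scheduler is doing what you asked and the cap is elsewhere; if they don't,
the select line is the first place to look.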

Best,
Andy

On Wed, Sep 13, 2017 at 12:12 PM, Utkarsh Ayachit <utkarsh.ayachit at kitware.com> wrote:

> > It should be related to your MPI environment: maybe oversubscribing
> > (more than one MPI process per core) is not the default behavior.
>
> I am tempted to second that. There's nothing in ParaView that checks
> how many cores your node has, so if there's a cap, it's coming from
> the MPI implementation itself.
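
If the cap does turn out to be in the MPI layer, oversubscription usually
has to be requested explicitly. As a rough sketch (this assumes Open MPI
and reuses the select syntax from above; other MPI stacks and launchers
spell this differently):

  # Ask PBS for more MPI ranks per node than the node has cores:
  #PBS -l select=1:ncpus=36:mpiprocs=64

  # With Open MPI, explicitly allow more ranks than available slots:
  mpirun --oversubscribe -np 64 pvserver

Whether oversubscribing actually helps is a separate question, since all 64
ranks will then be competing for the same 36 cores.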