[Paraview] pvserver maxed out
David Thompson
dcthomp at sandia.gov
Tue Jun 16 17:16:42 EDT 2009
> When I try that (on my MacBook Pro w/ 2 procs):
> mpirun --mca mpi_yield_when_idle 1 -np 2 pvserver
>
> I still have one CPU pegged at 98%. Are you saying that shouldn't be
> the case?
It was my understanding that that's what mpi_yield_when_idle is there to
fix -- otherwise, oversubscribed slots leave multiple processes spinning
in busy-wait loops -- but I haven't fiddled with it in a while.
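
As far as I know, you can also set the parameter through the environment
instead of on the mpirun command line, and ompi_info will show you what it
is currently set to. Roughly like this (your Open MPI install may differ):

  # set the MCA parameter via the environment before launching
  export OMPI_MCA_mpi_yield_when_idle=1
  mpirun -np 2 pvserver

  # check the current value of the parameter
  ompi_info --param mpi all | grep yield_when_idle

Also, as I understand it, the yield only helps when something else is
competing for the CPU -- Open MPI still polls in a loop, it just calls
yield each time around -- so on an otherwise idle machine you may still
see high CPU usage even with the parameter set.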
David
> On Jun 16, 2009, at 3:47 PM, David Thompson wrote:
>
> >
> >> I confirm that with mpich2 version 1.0.7, using the default device
> >> (ch3:sock), the load of an idle pvserver is near 0. With ch3:nemesis,
> >> all but one of the processes use a full CPU. With OpenMPI, I think
> >> all processes use a full CPU.
> >
> > This is configurable on the mpirun command line:
> >
> > http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded
> >
> > If you set the MCA parameter mpi_yield_when_idle to a non-zero value,
> > OpenMPI will not aggressively poll inside MPI routines. For example:
> >
> > mpirun --mca mpi_yield_when_idle 1 -np 4 pvserver
> >
> > David
> >
> >> In conclusion, if you are running pvserver on a Mac laptop, use
> >> mpich2 with default build options.
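> >>
> >> (For what it's worth, the channel is picked when mpich2 is configured,
> >> so switching devices means rebuilding -- roughly like this, though the
> >> exact configure options and install prefix below are just examples:
> >>
> >>   # sockets channel (the 1.0.7 default); idle ranks block instead of spinning
> >>   ./configure --with-device=ch3:sock --prefix=$HOME/mpich2-sock
> >>   make && make install
> >>
> >> versus --with-device=ch3:nemesis for the shared-memory channel that
> >> busy-polls.)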
> >>
> >> -berk
> >>
> >> On Mon, Jun 15, 2009 at 9:11 PM, Berk Geveci <berk.geveci at kitware.com> wrote:
> >>> For this reason, I use MPICH2 on my MacBook Pro and OpenMPI on my
> >>> Mac Pro. I can't remember which device I ended up settling on - I
> >>> think the default. If I remember correctly, ch3:nemesis was causing
> >>> the same issue as OpenMPI.
> >>>
> >>> -berk
> >>>
> >>> On Mon, Jun 15, 2009 at 1:44 PM, Randy
> >>> Heiland<heiland at indiana.edu> wrote:
> >>>> Thanks Utkarsh. Just to add to this thread, I asked some of the
> >>>> OMPI developers and apparently it's not possible to avoid this
> >>>> behavior.
> >>>> -Randy
> >>>> On Jun 15, 2009, at 11:35 AM, Utkarsh Ayachit wrote:
> >>>>
> >>>> Yup, that's expected. That's because of your MPI implementation:
> >>>> OpenMPI does a busy wait when waiting for MPI messages. I think
> >>>> there's an environment variable or something that you can set to
> >>>> disable that, but I am not sure. Maybe someone on the mailing list
> >>>> knows, or try checking the OpenMPI documentation.
> >>>>
> >>>> Utkarsh
> >>>>
> >>>> On Mon, Jun 15, 2009 at 11:28 AM, Randy Heiland <heiland at indiana.edu> wrote:
> >>>>>
> >>>>> When I use 'mpirun -np 2 pvserver' on my dual-core MacBook
> >>>>> (either connecting to it via the paraview client or standalone),
> >>>>> it keeps one of my CPUs pegged at nearly 100%. Is this to be
> >>>>> expected? (Running the serial './pvserver' of course just prints
> >>>>> 'Listening...Waiting for client...' with no CPU load.)
> >>>>>
> >>>>> % ./pvserver --version
> >>>>> ParaView3.7
> >>>>>
> >>>>> % ompi_info
> >>>>> Open MPI: 1.3.1
> >>>>>
> >>>>> thanks, Randy