[Paraview] memory load on client when running on server
jean mensa
jmensa at rsmas.miami.edu
Thu Jun 19 21:25:28 EDT 2014
The dataset is 10 km by 4 km by 50 meters; it's a very shallow slab. Also,
the model mesh is discontinuous in some fields (Velocity, Temperature,
etc.). Some fields are projected onto a continuous mesh, though: Velocity_CG
and Temperature_CG, for example. You might want to try opening one of those
first; in that case the size should be about a fourth of that of the
discontinuous mesh.
Apart from the problems with the dataset, I still don't understand why the
load should be on the client... As a note, I didn't install PV on the server
myself, so I don't know whether the installation is correct. The client seems
to work fine, but I am not sure the server support is correctly installed. Is
there a test I can run to verify the installation? If it isn't installed
correctly, can I just use the binary version on the server side?
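
(Would something like the following be a reasonable smoke test? The install
path is only a placeholder, and the pvserver line mirrors the one you used
below.)

# on the server, from the binary release
cd /path/to/ParaView-4.1.0-Linux-64bit/lib/paraview-4.1
LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH ./pvserver --server-port=11111 --enable-bt
# on the client: paraview, then File -> Connect -> cs://<server-host>:11111,
# and check that a simple Sources -> Sphere renders across the connection
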
thanks,
j
On Thu, Jun 19, 2014 at 8:18 PM, Burlen Loring <bloring at lbl.gov> wrote:
> Hi Jean,
>
> PV is not liking this dataset!
>
> I can reproduce the crash: a std::bad_alloc exception that occurs in
> vtkDataSetSurfaceFilter::NewFastGeomQuad(int). This filter is choking on
> the data; I'm not sure why.
>
> I tried it on a Cray and PV used about 32 G just to open one scalar array!
>
> I see a bunch of rendering artifacts that are usually a result of coincident
> geometry and that go away when I zoom in. I notice that the data is much
> larger in x and y than in z. Perhaps it's a rendering precision issue? I'm
> not sure whether the surface filter could suffer from a precision issue.
> Anyone out there know of precision-related caveats?
>
> Do you have overlapping/coincident geometry? Maybe ghost cells? It looked
> like you did. You may want to try removing these; it may help. I couldn't
> find any other obvious bugs in your dataset.
>
> Burlen
>
>
> rocky:/work/ParaView/ParaView-4.1.0-Linux-64bit/lib/paraview-4.1$LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
> ./pvserver --enable-bt
> Waiting for client...
> Connection URL: cs://rocky.dhcp:11111
> Accepting connection(s): rocky.dhcp:11111
> Client connected.
> terminate called after throwing an instance of 'std::bad_alloc'
> what(): std::bad_alloc
>
> =========================================================
> Process id 15417 Caught SIGABRT
> Program Stack:
> WARNING: The stack trace will not use advanced capabilities because this
> is a release build.
> 0x3769a0ef90 : ??? [(???) ???:-1]
> 0x37696359e9 : gsignal [(libc.so.6) ???:-1]
> 0x37696370f8 : abort [(libc.so.6) ???:-1]
> 0x376c660565 : __gnu_cxx::__verbose_terminate_handler() [(libstdc++.so.6)
> ???:-1]
> 0x376c65e6c6 : ??? [(???) ???:-1]
> 0x376c65e6f3 : ??? [(???) ???:-1]
> 0x376c65e91f : ??? [(???) ???:-1]
> 0x376c65ee2d : operator new(unsigned long) [(libstdc++.so.6) ???:-1]
> 0x376c65eec9 : operator new[](unsigned long) [(libstdc++.so.6) ???:-1]
> 0x7fc327d46712 : vtkDataSetSurfaceFilter::NewFastGeomQuad(int)
> [(libvtkFiltersGeometry-pv4.1.so.1) ???:-1]
> 0x7fc327d46aca : vtkDataSetSurfaceFilter::InsertTriInHash(long long, long
> long, long long, long long, long long) [(libvtkFiltersGeometry-pv4.1.so.1)
> ???:-1]
> 0x7fc327d4b5c0 :
> vtkDataSetSurfaceFilter::UnstructuredGridExecute(vtkDataSet*, vtkPolyData*,
> int) [(libvtkFiltersGeometry-pv4.1.so.1) ???:-1]
> 0x7fc3212874a6 :
> vtkPVGeometryFilter::UnstructuredGridExecute(vtkUnstructuredGridBase*,
> vtkPolyData*, int) [(libvtkPVVTKExtensionsRendering-pv4.1.so.1) ???:-1]
> 0x7fc321287cf9 : vtkPVGeometryFilter::RequestData(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkPVVTKExtensionsRendering-pv4.1.so.1) ???:-1]
> 0x7fc3265ca744 : vtkExecutive::CallAlgorithm(vtkInformation*, int,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c6c5c : vtkDemandDrivenPipeline::ExecuteData(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c56a1 : vtkCompositeDataPipeline::ExecuteData(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c9613 : vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265e1a59 :
> vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c3737 :
> vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c95bc : vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265e1a59 :
> vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c3737 :
> vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c95bc : vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265e1a59 :
> vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c8b6e : vtkDemandDrivenPipeline::UpdateData(int)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265e36c5 : vtkStreamingDemandDrivenPipeline::Update(int)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc32256db2a : vtkGeometryRepresentation::RequestData(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc3265ca744 : vtkExecutive::CallAlgorithm(vtkInformation*, int,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c6c5c : vtkDemandDrivenPipeline::ExecuteData(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c56a1 : vtkCompositeDataPipeline::ExecuteData(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c9613 : vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265e1a59 :
> vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*,
> vtkInformationVector**, vtkInformationVector*)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265c8b6e : vtkDemandDrivenPipeline::UpdateData(int)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc3265e36c5 : vtkStreamingDemandDrivenPipeline::Update(int)
> [(libvtkCommonExecutionModel-pv4.1.so.1) ???:-1]
> 0x7fc322581c8f :
> vtkPVDataRepresentation::ProcessViewRequest(vtkInformationRequestKey*,
> vtkInformation*, vtkInformation*)
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc32256d699 :
> vtkGeometryRepresentation::ProcessViewRequest(vtkInformationRequestKey*,
> vtkInformation*, vtkInformation*)
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc32256f781 :
> vtkGeometryRepresentationWithFaces::ProcessViewRequest(vtkInformationRequestKey*,
> vtkInformation*, vtkInformation*)
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc3225a5508 :
> vtkPVView::CallProcessViewRequest(vtkInformationRequestKey*,
> vtkInformation*, vtkInformationVector*)
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc3225a56d2 : vtkPVView::Update()
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc3225964fa : vtkPVRenderView::Update()
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc3225959b1 : vtkPVRenderView::ResetCamera()
> [(libvtkPVClientServerCoreRendering-pv4.1.so.1) ???:-1]
> 0x7fc329b5770f : vtkPVRenderViewCommand(vtkClientServerInterpreter*,
> vtkObjectBase*, char const*, vtkClientServerStream const&,
> vtkClientServerStream&, void*)
> [(libvtkPVServerManagerApplication-pv4.1.so.1) ???:-1]
> 0x7fc3271ed250 : vtkClientServerInterpreter::CallCommandFunction(char
> const*, vtkObjectBase*, char const*, vtkClientServerStream const&,
> vtkClientServerStream&) [(libvtkClientServer-pv4.1.so.1) ???:-1]
> 0x7fc3271f2183 :
> vtkClientServerInterpreter::ProcessCommandInvoke(vtkClientServerStream
> const&, int) [(libvtkClientServer-pv4.1.so.1) ???:-1]
> 0x7fc3271f0ef2 :
> vtkClientServerInterpreter::ProcessOneMessage(vtkClientServerStream const&,
> int) [(libvtkClientServer-pv4.1.so.1) ???:-1]
> 0x7fc3271f13ad :
> vtkClientServerInterpreter::ProcessStream(vtkClientServerStream const&)
> [(libvtkClientServer-pv4.1.so.1) ???:-1]
> 0x7fc328e98854 : vtkSIProperty::ProcessMessage(vtkClientServerStream&)
> [(libvtkPVServerImplementationCore-pv4.1.so.1) ???:-1]
> 0x7fc328e988fe : vtkSIProperty::Push(paraview_protobuf::Message*, int)
> [(libvtkPVServerImplementationCore-pv4.1.so.1) ???:-1]
> 0x7fc328e99550 : vtkSIProxy::Push(paraview_protobuf::Message*)
> [(libvtkPVServerImplementationCore-pv4.1.so.1) ???:-1]
> 0x7fc328e7e32a :
> vtkPVSessionCore::PushStateInternal(paraview_protobuf::Message*)
> [(libvtkPVServerImplementationCore-pv4.1.so.1) ???:-1]
> 0x7fc328e7b3a4 : vtkPVSessionCore::PushState(paraview_protobuf::Message*)
> [(libvtkPVServerImplementationCore-pv4.1.so.1) ???:-1]
> 0x7fc328e79fad : vtkPVSessionBase::PushState(paraview_protobuf::Message*)
> [(libvtkPVServerImplementationCore-pv4.1.so.1) ???:-1]
> 0x7fc328e866dc : vtkPVSessionServer::OnClientServerMessageRMI(void*, int)
> [(libvtkPVServerImplementationCore-pv4.1.so.1) ???:-1]
> 0x7fc326a2c163 : vtkMultiProcessController::ProcessRMI(int, void*, int,
> int) [(libvtkParallelCore-pv4.1.so.1) ???:-1]
> 0x7fc326a2c37b : vtkMultiProcessController::ProcessRMIs(int, int)
> [(libvtkParallelCore-pv4.1.so.1) ???:-1]
> 0x7fc328cf7b46 :
> vtkTCPNetworkAccessManager::ProcessEventsInternal(unsigned long, bool)
> [(libvtkPVClientServerCoreCore-pv4.1.so.1) ???:-1]
> 0x4019a6 : __gxx_personality_v0 [(pvserver) ???:-1]
> 0x4019ee : main [(pvserver) ???:-1]
> 0x3769621b45 : __libc_start_main [(libc.so.6) ???:-1]
> 0x4016aa : __gxx_personality_v0 [(pvserver) ???:-1]
> =========================================================
>
>
>
>
> On 06/19/2014 03:05 PM, jean mensa wrote:
>
> I am running 4.0.1; I will upgrade to 4.1. In the meantime, here is the
> dataset (~1 GB):
>
> https://drive.google.com/file/d/0B2rIXNfrsOf8M054bzFNSmo3UU0/edit?usp=sharing
> Thanks for helping,
> j
>
>
> On Thu, Jun 19, 2014 at 5:42 PM, Burlen Loring <bloring at lbl.gov> wrote:
>
>> Could you share the dataset? Maybe I can reproduce the issue.
>>
>> Which version of PV are you using? You're not using the latest (4.1) if
>> --enable-bt isn't there. Upgrading to 4.1 may help...
>>
>>
>> On 06/19/2014 02:10 PM, jean mensa wrote:
>>
>> The server has about 16 GB of RAM, while the client has 8 GB. I am loading
>> only one array. I start the server with a plain 'pvserver', and then I
>> establish the connection manually from ParaView, which I start by simply
>> calling 'paraview'.
>>
>> --enable-bt is not a valid option but dmesg does show a segfault: at-spi-bus-laun[17239]:
>> segfault at 968 ip 0000003e4e425321 sp 00007fff9770a040 error 4 in
>> libX11.so.6.3.0
>> I don't think this can be the problem, though, because the same excessive
>> memory load happens on the Mac client with no crash and a different version
>> of X (XQuartz).
>>
>> In the case of remote rendering, I should see a black window coming from
>> pvserver where the dataset should be displayed, right? Well, I see the
>> window, but nothing gets rendered in it... If remote rendering is disabled,
>> do I still see the window?
>> j
>>
>>
>> On Thu, Jun 19, 2014 at 4:43 PM, Burlen Loring <burlen.loring at gmail.com>
>> wrote:
>>
>>> Questions:
>>> How much RAM is on this system?
>>> How many arrays are you loading? If memory is an issue, you may get by with
>>> loading only one (or fewer) of them.
>>> What command line are you using to start the server?
>>>
>>> Can you start the client and server with --enable-bt on the respective
>>> command lines? This should print a stack trace during the crash. If it does,
>>> send it to the list. If PV is killed because it's out of memory, this may
>>> have been logged (on Linux: dmesg | egrep -i 'killed process').
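>>>
>>> Concretely, something like this (the dmesg check afterwards tells you
>>> whether the kernel's OOM killer got involved):
>>>
>>> # on the server
>>> ./pvserver --enable-bt
>>> # on the client
>>> ./paraview --enable-bt
>>> # after a crash, on the machine that died:
>>> dmesg | egrep -i 'killed process'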
>>>
>>> About your original question: when running client/server (not multicore),
>>> with the server on a remote system and the client on your desktop, the
>>> client only has to display images rendered on the server, so there is very
>>> little load on the client side. If you're seeing a high load in that case,
>>> remote rendering is probably disabled; perhaps this is a bug.
>>>
>>> Multicore would disable remote rendering, according to a recent post on
>>> the mailing list.
>>>
>>> In multicore mode the client and servers run on the same system, and thus
>>> the X server load could in fact be coming from the server rather than the
>>> client.
>>>
>>>
>>> On 06/19/2014 01:26 PM, jean mensa wrote:
>>>
>>> It's the crash. The server loads about 8 GB of data and then idles.
>>> The status bar also shows that the vtkUnstructuredGridReader is loading the
>>> dataset, and it reaches 100% without giving any errors. The problem seems
>>> to be in displaying the data...
>>>
>>>
>>> On Thu, Jun 19, 2014 at 4:17 PM, Burlen Loring <bloring at lbl.gov> wrote:
>>> >
>>> > The server seems to load the dataset properly but no image is shown on
>>> the client.
>>> >
>>> > OK, so is this the crash or a new issue? If there's no image how could
>>> you tell the server loaded the dataset correctly?
>>> >
>>> >
>>> >
>>> > On 06/19/2014 12:53 PM, jean mensa wrote:
>>> >
>>> > Same result. I disabled multicore, and I was already using a 0 MB remote
>>> > rendering threshold. I have tried ParaView on both Linux and Mac OS X.
>>> > The server seems to load the dataset properly, but no image is shown on
>>> > the client.
>>> >
>>> >
>>> >
>>> > On Thu, Jun 19, 2014 at 3:40 PM, Burlen Loring <bloring at lbl.gov>
>>> wrote:
>>> >>
>>> >> Ugh, you're using the multicore option to start the servers! I believe
>>> >> this forces remote rendering off, which would explain your issues. Could
>>> >> you try disabling the multicore option and starting your servers
>>> >> manually?
>>> >>
>>> >>
>>> >> On 06/19/2014 12:29 PM, Burlen Loring wrote:
>>> >>
>>> >> OK, one possibility is that it may be related to remote rendering
>>> settings. Under menu Edit->Settings->Render View->Server there is a remote
>>> render threshold. I always set this to 0 bytes to ensure remote parallel
>>> rendering. What's yours set to?
>>> >>
>>> >> On 06/19/2014 12:18 PM, jean mensa wrote:
>>> >>
>>> >> Sorry, I didn't explain it properly. I have already tried connecting to
>>> >> a pvserver from a GUI running on the client (so without an ssh
>>> >> connection), but it also drains the memory on Xorg. What is the load
>>> >> supposed to be on a client-pvserver connection?
>>> >> Thanks,
>>> >>
>>> >>
>>> >> On Thu, Jun 19, 2014 at 3:08 PM, Burlen Loring <bloring at lbl.gov>
>>> wrote:
>>> >>>
>>> >>> Jean,
>>> >>>
>>> >>> What you describe is expected. X forwarding doesn't work well for
>>> >>> visualizing large datasets. The point is: don't use X forwarding. The
>>> >>> links show how to configure the client/server connection without X
>>> >>> forwarding. Make sure that there is no -X on the ssh command line.
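>>> >>>
>>> >>> For example, roughly (user and host names are just placeholders; the
>>> >>> GUI then connects to localhost through the tunnel):
>>> >>>
>>> >>> # plain ssh, no -X or -Y, with a tunnel for the default pvserver port
>>> >>> ssh -L 11111:localhost:11111 user@server.example
>>> >>> # on the server, start the server as usual
>>> >>> ./pvserver
>>> >>> # in the local ParaView GUI: File -> Connect -> cs://localhost:11111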
>>> >>>
>>> >>> Burlen
>>> >>>
>>> >>>
>>> >>> On 06/19/2014 11:46 AM, jean mensa wrote:
>>> >>>
>>> >>> When I do that, the Xorg process on the client drains the memory until
>>> >>> it crashes. The connection, I think, is properly established, since it
>>> >>> works fine for smaller datasets. What should the load on the client be
>>> >>> in that case?
>>> >>> j
>>> >>>
>>> >>>
>>> >>> On Thu, Jun 19, 2014 at 2:14 PM, Burlen Loring <bloring at lbl.gov>
>>> wrote:
>>> >>>>
>>> >>>> Yes, this is expected. The OpenGL calls get piped to your local
>>> system. This includes data like vertices and colors as well. X forwarding
>>> doesn't give good performance and with large data it may not even be
>>> feasible.
>>> >>>>
>>> >>>> You need to set up a client/server connection.
>>> >>>>
>>> >>>> http://www.paraview.org/Wiki/Reverse_connection_and_port_forwarding
>>> >>>> http://www.paraview.org/Wiki/ParaView:Server_Configuration
>>> >>>> http://www.paraview.org/Wiki/Setting_up_a_ParaView_Server
>>> >>>> http://www.paraview.org/Wiki/ParaView_And_Mesa_3D
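>>> >>>>
>>> >>>> For instance, a reverse connection (the server dials back to the
>>> >>>> client, which helps when the server cannot accept incoming
>>> >>>> connections) looks roughly like this; the host name is a placeholder:
>>> >>>>
>>> >>>> # client: File -> Connect -> Add Server, type "Client / Server
>>> >>>> # (reverse connection)", port 11111, then leave it waiting
>>> >>>> # server (connects back to the client on the default port 11111):
>>> >>>> ./pvserver -rc --client-host=my.desktop.example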
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> On 06/19/2014 10:33 AM, jean mensa wrote:
>>> >>>>
>>> >>>> Hi all,
>>> >>>> I have a rather large unstructured pvtu dataset (about 4 million
>>> >>>> points) which I am visualizing on a remote server. I am running the
>>> >>>> paraview GUI (with the multicore option) on the server, and I connect
>>> >>>> to it via ssh, sharing the X server; in this way I can control the GUI
>>> >>>> running on the server. I thought that with this configuration I would
>>> >>>> only receive a screenshot of the window and no actual data from
>>> >>>> ParaView, but the client seems to share at least some of the load of
>>> >>>> the visualization. Is this the expected behaviour? How can I avoid
>>> >>>> loading data on the client side?
>>> >>>> Thanks in advance,
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>
>
>
>
--
Jean A. Mensa
Graduate Assistant
University of Miami
RSMAS - MPO
4600 Rickenbacker Causeway
Miami, FL 33149-1098