[Cdash] Database performance

Tim Gallagher tim.gallagher at gatech.edu
Mon Jan 30 15:21:25 UTC 2012


I'm not a DB person, or even a sys-admin, so I'm not entirely sure what you mean by checking indexes. 

But, I ran EXPLAIN on the query and got:

Nested Loop IN Join  (cost=22.91..34.98 rows=1 width=660)

Then I reindexed the builderror table, reran EXPLAIN, and got:

 Nested Loop  (cost=22.79..26.83 rows=1 width=660)

So the total cost decreased, although not by much. This query took ~1 minute to execute at that point; if I let it run through all the tests, some queries take over an hour. So I don't know if dropping the estimated cost by 8 makes a huge difference. But I don't really know much about how PostgreSQL does its magic.

We're in the process of moving the CDash server off an important server and onto an older, dedicated machine where I can let PostgreSQL run wild. I will try increasing the memory limits once that is finished.
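The memory settings involved are along these lines in postgresql.conf (a sketch only; the values are placeholders to be tuned to the machine's actual RAM, not recommendations):

```ini
# postgresql.conf -- memory-related settings (placeholder values;
# tune to the dedicated machine's actual RAM)
shared_buffers = 256MB        # often only 24-32MB on older default installs
work_mem = 16MB               # per-sort/hash-join memory; affects joins like the one above
maintenance_work_mem = 128MB  # speeds up VACUUM and REINDEX
effective_cache_size = 1GB    # planner hint: how much the OS can cache
```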

Thanks,

Tim

----- Original Message -----
From: "Julien Jomier" <julien.jomier at kitware.com>
To: "tim gallagher" <tim.gallagher at gatech.edu>
Cc: cdash at public.kitware.com
Sent: Monday, January 30, 2012 6:55:50 AM
Subject: Re: [Cdash] Database performance

Hi Tim,

Thanks for the report. Can you check the indexes on the 'builderror' table?
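On PostgreSQL the existing indexes can be listed with \d builderror in psql or by querying pg_indexes. As a rough illustration of why an index on (buildid, type) matters for the query in this thread, here is a sketch using an in-memory SQLite database as a stand-in (the schema is reduced to just the columns the query touches; the index name and helper function are made up):

```python
import sqlite3

# Stand-in schema: column names taken from the query in this thread;
# the real CDash 'builderror' table has more columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE builderror "
            "(buildid INTEGER, type INTEGER, crc32 INTEGER, newstatus INTEGER)")
con.executemany("INSERT INTO builderror VALUES (?, ?, ?, 0)",
                [(7033, 1, c) for c in range(100)] +
                [(7025, 1, c) for c in range(50)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

q = "SELECT crc32 FROM builderror WHERE buildid=7033 AND type=1"
print(plan(q))  # without an index: a full table scan

con.execute("CREATE INDEX be_buildid_type ON builderror (buildid, type)")
print(plan(q))  # with the index: a search using be_buildid_type
```

The same composite index serves both the outer UPDATE's WHERE clause and the two subqueries, since all of them filter on buildid and type together.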

Also, it might help to allocate more memory to the PgSQL database.

Julien

On 28/01/2012 00:54, Tim Gallagher wrote:
> The offending SQL query is:
>
> UPDATE builderror SET newstatus=1 WHERE buildid=7033 AND type=1 AND crc32 IN
>                 (SELECT crc32 FROM (SELECT crc32 FROM builderror WHERE buildid=7033
>                 AND type=1) AS builderrora
>                 LEFT JOIN (SELECT crc32 as crc32b FROM builderror WHERE buildid=7025
>                 AND type=1) AS builderrorb ON builderrora.crc32=builderrorb.crc32b
>                 WHERE builderrorb.crc32b IS NULL)
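That IN (... LEFT JOIN ... IS NULL) subquery is an anti-join: it selects the crc32 values present in build 7033 but absent from build 7025. An equivalent NOT EXISTS form is often easier for the planner to optimize; a sketch using an in-memory SQLite database as a stand-in (not CDash's actual code) to show that both forms select the same rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE builderror "
            "(buildid INTEGER, type INTEGER, crc32 INTEGER, newstatus INTEGER DEFAULT 0)")
# Build 7033 has errors 10, 20, 30; build 7025 has 10, 20 -> 30 is "new"
rows = [(7033, 1, c) for c in (10, 20, 30)] + [(7025, 1, c) for c in (10, 20)]
con.executemany("INSERT INTO builderror (buildid, type, crc32) VALUES (?, ?, ?)", rows)

# Original shape: LEFT JOIN ... IS NULL anti-join
left_join = con.execute("""
    SELECT crc32 FROM (SELECT crc32 FROM builderror
                       WHERE buildid=7033 AND type=1) AS a
    LEFT JOIN (SELECT crc32 AS crc32b FROM builderror
               WHERE buildid=7025 AND type=1) AS b
      ON a.crc32 = b.crc32b
    WHERE b.crc32b IS NULL""").fetchall()

# Equivalent NOT EXISTS form
not_exists = con.execute("""
    SELECT crc32 FROM builderror AS a
    WHERE a.buildid=7033 AND a.type=1
      AND NOT EXISTS (SELECT 1 FROM builderror AS b
                      WHERE b.buildid=7025 AND b.type=1
                        AND b.crc32 = a.crc32)""").fetchall()

print(sorted(left_join), sorted(not_exists))  # both: [(30,)]
```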
>
> Tim
>
> ----- Original Message -----
> From: "Tim Gallagher"<tim.gallagher at gatech.edu>
> To: "tim gallagher"<tim.gallagher at gatech.edu>
> Cc: cdash at public.kitware.com, "Julien Jomier"<julien.jomier at kitware.com>
> Sent: Friday, January 27, 2012 6:12:28 PM
> Subject: Re: [Cdash] Database performance
>
> Okay, so I think there are some issues:
>
> I set it to do the asynchronous submissions like you suggested.
>
> The first two processes during the first submission say:
>
> 26846 	<IDLE>  in transaction
> 26851 UPDATE submission SET status=2, finished='2012-01-27 23:01:01', lastupdated='2012-01-27 23:01:01' WHERE id='2'
>
> Every new process that is started has LOCK TABLE submissionprocessor
>
> and nothing gets into the database.
>
> When I ran without asynchronous submissions, the CDash log has errors such as:
>
> (compute_error_difference): SQL error: ERROR: deadlock detected DETAIL: Process 27999 waits for ShareLock on transaction 5845759; blocked by process 22703. Process 22703 waits for ShareLock on transaction 5854110; blocked by process 27999.
>
> Tim
>
> ----- Original Message -----
> From: "Tim Gallagher"<tim.gallagher at gatech.edu>
> To: "Julien Jomier"<julien.jomier at kitware.com>
> Cc: cdash at public.kitware.com
> Sent: Friday, January 27, 2012 5:05:03 PM
> Subject: Re: [Cdash] Database performance
>
> Working on the phppgadmin part, but here are the table sizes:
>
> cdash=>  select count(*) from build;
>   count
> -------
>    6815
> (1 row)
>
> cdash=>  select count(*) from build2test;
>   count
> --------
>   255563
> (1 row)
>
> cdash=>  select count(*) from test;
>   count
> -------
>    7722
> (1 row)
>
> I'll check the admin as soon as I get it installed.
>
> Tim
>
> ----- Original Message -----
> From: "Julien Jomier"<julien.jomier at kitware.com>
> To: "tim gallagher"<tim.gallagher at gatech.edu>
> Cc: cdash at public.kitware.com
> Sent: Friday, January 27, 2012 4:15:52 PM
> Subject: Re: [Cdash] Database performance
>
> Hi Tim,
>
> If you install phppgadmin you should be able to monitor the Processes
> for the CDash database and it should display the SQL query that is
> taking a long time.
>
> You might also need to vacuum/reindex the database and see if that helps
> a little bit. How big (number of rows in the build, build2test and test
> tables) is your current database?
>
> Julien
>
> On 27/01/2012 20:47, Tim Gallagher wrote:
>> I'll try asynchronous submission, which will help the timeout issue, but the huge growth in insertion time should still be there, which is really the problem. Inserting one build report was taking 2.5 hours before I finally killed it.
>>
>> How can I get the queries? I was just watching the processes in top which shows "postgresql: cdash cdash 127.0.0.1 [processid] UPDATE". Is there some option to log queries?
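For the logging question: PostgreSQL can record slow statements via postgresql.conf (a sketch; the 1000 ms threshold is a placeholder to adjust):

```ini
# postgresql.conf -- log any statement that runs longer than the threshold
log_min_duration_statement = 1000   # in ms; -1 disables, 0 logs every statement
```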
>>
>> Tim
>>
>> ----- Original Message -----
>> From: "Julien Jomier"<julien.jomier at kitware.com>
>> To: "tim gallagher"<tim.gallagher at gatech.edu>
>> Cc: cdash at public.kitware.com
>> Sent: Friday, January 27, 2012 2:06:16 PM
>> Subject: Re: [Cdash] Database performance
>>
>> Hi Tim,
>>
>> I would recommend enabling asynchronous submissions in your
>> config.local.php:
>>
>>      $CDASH_ASYNCHRONOUS_SUBMISSION = true;
>>
>> This will let CDash receive the XML files and then process the
>> submissions asynchronously, without (hopefully) any time-outs. The
>> only minor drawback is that submissions do not show up on the
>> dashboard instantly.
>>
>> Could you send us the queries that seem to be taking a long time,
>> and we'll try to investigate.
>>
>> Julien
>>
>> On 27/01/2012 04:08, Tim Gallagher wrote:
>>> Hi,
>>>
>>> We have the following setup for tests in our code:
>>>
>>>        foreach(TESTCASE ${LESLIE_AVAILABLE_TESTCASES})
>>>          set(CTEST_CONFIGURE_COMMAND "./setup.py -t ${TESTCASE} '{${LESLIE_CONFIGURE_DICT_BASE}}'")
>>>          ctest_configure(BUILD "${CTEST_REPO_DIRECTORY}" APPEND)
>>>          set(CTEST_BUILD_COMMAND "./setup.py -t ${TESTCASE} '{${LESLIE_BUILD_DICT_BASE}}'")
>>>          ctest_build(BUILD "${CTEST_REPO_DIRECTORY}" APPEND)
>>>          ctest_submit(PARTS Configure Build)
>>>        endforeach()
>>>
>>> Each build process generates something like 20K of data based on the output from CTest. After about 10 test cases, the submissions begin to time-out with:
>>>
>>>       Error when uploading file: /data/CTestScripts/leslie-release/leslie-TestData/Testing/20120127-0230/Build.xml
>>>       Error message was: Operation too slow. Less than 1 bytes/sec transfered the last 120 seconds
>>>
>>> When looking at the processes running on the server during this time, PostgreSQL is busy running UPDATE statements, and each one takes progressively longer as submissions come in. There are over 200 tests, so only making it through 10 is a problem.
>>>
>>> Are there any suggestions on how to improve the performance here? Is this a problem with postgresql or with the SQL statements CDash uses? Has anybody come across this?
>>>
>>> Tim
>>> _______________________________________________
>>> Cdash mailing list
>>> Cdash at public.kitware.com
>>> http://public.kitware.com/cgi-bin/mailman/listinfo/cdash
>>>
>>
> _______________________________________________
> Cdash mailing list
> Cdash at public.kitware.com
> http://public.kitware.com/cgi-bin/mailman/listinfo/cdash
>


