[Cdash] Report of failed time status

Olivier Pierard olivier.pierard at cenaero.be
Fri Jul 1 08:14:30 UTC 2011

Thanks for these clarifications, Julien.

But... I still need a few more.

1°) Why not consider CPU time instead of wall-clock time?  Is it
really difficult to implement (maybe more a CTest issue than a CDash one)?

2°) Suppose we have 'Test time # max failures before flag = 3'.  On
the third day with higher test times, are the first two days (the ones
with test time failures) included in the average?  I hope not, because
with a coefficient of 0.3 in the average, times from more than three
days ago are almost negligible.  I have already noticed that test
failures not due to time are not taken into account in the average.
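To make the weighting concrete, here is a small sketch (function names are mine, not CDash's) of the wiki formula newMean = (1-alpha)*oldMean + alpha*currentTime; unrolling it shows each past measurement carries weight alpha*(1-alpha)**age, shrinking geometrically with age:

```python
ALPHA = 0.3  # hard-coded in CDash, per Julien's reply below

def rolling_mean(times, alpha=ALPHA):
    """Apply newMean = (1-alpha)*oldMean + alpha*currentTime over a history."""
    mean = times[0]  # seed with the first measurement
    for t in times[1:]:
        mean = (1 - alpha) * mean + alpha * t
    return mean

def weight(age, alpha=ALPHA):
    """Effective weight of a measurement that is `age` submissions old."""
    return alpha * (1 - alpha) ** age

# A spike on the most recent run moves the mean only partway:
print(rolling_mean([30.0, 30.0, 40.0]))  # 33.0, not the plain average 33.33...
print(weight(3))                         # ~0.103: three-day-old times still count a little
```

So a measurement three submissions old still carries roughly 10% weight with alpha = 0.3; only well beyond that does its contribution become truly negligible.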

3°) From the documentation on the VTK wiki:

A test is defined as failing if the following holds, where previousSD
is first clamped to the threshold (if previousSD < thresholdSD then
previousSD = thresholdSD):

 currentTime > previousMean + multiplier*previousSD.

In my case, with the following parameters:
Test time SD (coefficient): 4.0
Test time SD threshold: 1
Test time # max failures before flag: 1

And for a given test (reported on CDash - test-case report - execution
time(s) line):
- mean:31.29
- std:2.72
- Execution time : 36.26

=> previous SD > threshold => threshold not taken into account
=> previous mean + multiplier*previous SD = 31.29 + 4*2.72 = 42.17 > 36.26

==> The test time should be OK, but it is reported as failed and
flagged on the main CDash page !??
Maybe the reported mean and std are not the previous ones but the
current ones.  If I go to the previous report, it shows mean 29.16 -
std:0.0.  Maybe this previous 0.0 is used; note that if it is clamped
up to the threshold of 1, the bound becomes 29.16 + 4*1 = 33.16 <
36.26, which would explain the flag.  But clearly on the test times
graph there is a standard deviation (it oscillates between 29 and 41
over the last month).
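The check described above can be sketched as follows (the function name is mine, and this only mirrors the wiki's stated rule, not CDash's actual source). Plugging in both the current report's numbers and the previous report's numbers shows the two outcomes:

```python
def time_test_fails(current, prev_mean, prev_sd, multiplier, sd_threshold):
    """Wiki rule: clamp previousSD up to the threshold, then flag if
    currentTime > previousMean + multiplier * previousSD."""
    sd = max(prev_sd, sd_threshold)  # if previousSD < thresholdSD: previousSD = thresholdSD
    return current > prev_mean + multiplier * sd

# Numbers from the current test-case page: bound 31.29 + 4*2.72 = 42.17
print(time_test_fails(36.26, 31.29, 2.72, 4.0, 1.0))  # False: 36.26 <= 42.17, should pass

# Numbers from the *previous* report, std 0.0 clamped to 1: bound 33.16
print(time_test_fails(36.26, 29.16, 0.0, 4.0, 1.0))   # True: 36.26 > 33.16, flagged
```

If CDash uses the previous report's mean/std (with the std of 0.0 clamped to the threshold), the second call reproduces the flag I am seeing.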

4°) HTML link in the project configuration, Testing tab: in the
descriptions of 'test standard deviation' and 'test standard deviation
threshold', the link to the test timing description on the wiki is
wrong; maybe because I'm using CDash 1.6.2.

Thanks in advance for these additional clarifications.

> Hi Olivier,
>> On main dashboard page of the project, only failed test status are
>> reported and never due to failed time status.  Given the high number of
>> submissions and testcases of our project, this is unmanageable if we
>> don't have a quick view of the problem.
>> Is this related to 'test time #max failures before flag' in the testing
>> configuration tab ? I don't find any documentation about this feature.
> Sometimes the time to run a given test can be high due to unexpected
> and unrelated issues (such as machine load, etc.). The 'test time
> #max failures before flag' variable avoids flagging the test as
> failed unless such peaks occur more than $max times in a row.
>> On the other hand, the mean is computed as (from wiki): newMean =
>> (1-alpha)*oldMean + alpha*currentTime. Default value for alpha is 0.3
>> but I don't know where to modify it (I'm only project administrator, not
>> CDash admin). Am I right when saying that all tests in the history are
>> taken into account (of course over-weighted for most recent ones) and
>> this is not an equally weighted mean ?
> The alpha value is hard-coded to 0.3 and cannot be changed for now.
> You are correct that this is not an equally weighted mean but an
> approximated average.
>> Finally, in the testing tab of the project configuration, there is a
>> 'test time standard deviation' tab. But this is not the standard
>> deviation itself but a multiplier of the standard deviation; shouldn't
>> it be renamed for clarity?
> Thanks for the feedback. I've logged a bug report:
> http://public.kitware.com/Bug/view.php?id=12310
> Julien
