[Cdash] [Insight-developers] Dashboard: Normalizing TIMEOUTS

Luis Ibanez luis.ibanez at kitware.com
Tue Jan 19 00:33:58 UTC 2010


Bill,

Good point,

I was hoping that by basing the measure
on a test that we use for calibration,
we could account for the combination of
(processor + OS + compiler + flags).

E.g., we could use as a reference the time
that it takes to run a Median filter on a
100^3 image, and then scale the timeouts of
all the other tests relative to this one.

All dashboard builds would run this test first,
and then use its computation time as the
reference.

We could then say that the BSplineDeformable
registration9 test is expected to take 257 times
longer than the reference (Median filter) test...

and so on...
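
Just to make that concrete, a dashboard-side
helper could look roughly like this (only a
sketch in Python; the test names, the factors,
and the "MedianBenchmark" executable are made
up for illustration, not actual ITK names):

import subprocess
import time

def reference_time(benchmark_cmd):
    """Wall-clock seconds for one run of the calibration test."""
    start = time.monotonic()
    subprocess.run(benchmark_cmd, check=True)
    return time.monotonic() - start

# How many "reference units" each test is expected to need,
# estimated from historical dashboard timings (invented numbers).
FACTORS = {
    "itkBSplineDeformableRegistrationTest9": 257,
    "itkDiffeomorphicDemonsRegistrationTest": 153,
}

ref = reference_time(["./MedianBenchmark"])  # hypothetical calibration binary
timeouts = {name: factor * ref for name, factor in FACTORS.items()}
print(timeouts)

The resulting values could then be applied as
per-test TIMEOUT properties before ctest starts
the dashboard run.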

We will have to weigh how much a given test
depends on pure CPU power and how much
on I/O operations, though... and calibrate for
both...
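
With two reference measurements, the estimate
for a given test could combine both, e.g.
(again just a sketch; the weights and the slack
margin below are invented):

def scaled_timeout(cpu_ref, io_ref, cpu_factor, io_factor, slack=1.5):
    """Timeout estimate in seconds: weighted mix of the two
    reference timings, padded by a safety margin."""
    return slack * (cpu_factor * cpu_ref + io_factor * io_ref)

# cpu_ref / io_ref would come from the two calibration tests
# measured on this machine; the factors are per-test estimates.
print(scaled_timeout(cpu_ref=2.0, io_ref=0.8, cpu_factor=153, io_factor=10))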

It starts to sound like a great project for a
summer internship    :-)


    Luis


--------------------------------------------------------
On Mon, Jan 18, 2010 at 11:55 AM, Bill Lorensen <bill.lorensen at gmail.com> wrote:
> Luis,
>
> It would need to be broken down by build type. For example, a debug
> build on some platforms can be 10-30 times slower than a release build.
>
> Bill
>
> On Mon, Jan 18, 2010 at 11:32 AM, Luis Ibanez <luis.ibanez at kitware.com> wrote:
>> As you may have noticed, the standard practice
>> of using a single TIMEOUT number for all the
>> ~1,700 tests in ITK brings up the challenge of
>> defining what a good timeout value is for each
>> machine (and configuration, e.g. Release/Debug).
>>
>> The following proposal was raised in the past,
>> but we have not acted upon it:
>>
>> 1) Add to ITK one or two tests that can be considered
>>    a good benchmark for:
>>
>>        a) computation power
>>        b) input / output speed
>>
>> 2) Run those tests and use their timings as
>>     base values that characterize this machine.
>>
>> 3)  Define the timeout of every test based
>>     on the values found in (2), multiplied by
>>     a per-test factor.
>>
>>
>> Let's say that the computation benchmark takes
>> 2 seconds to run on the machine foobar.kitware;
>> then we can say that the DiffeomorphicDemons
>> registration test on the same machine should take
>>
>>           153 x (time of benchmark1)
>>
>> (where the number "153" is a factor that we
>> will have to estimate for each test).
>>
>> CDash already does a similar thing with the
>> historical record of the computation time that
>> it takes to run every test on a given machine,
>> although this is done on the CDash server,
>> and therefore it happens too late to be used
>> as a TIMEOUT mark.
>>
>> Another interesting option could be for
>> a machine to get access to the historical
>> record that CDash has computed, and then
>> use those values as a base for computing
>> the TIMEOUTs at the moment of running ctest.
>>
>>
>>   What do people think of these options?
>>
>>
>>         Luis
>
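
Regarding the last option in the quoted message
(seeding the TIMEOUTs from CDash's historical
record): assuming the per-test median times had
been exported from CDash into a simple two-column
CSV file (test name, median seconds); that export
format is an assumption here, not an existing
CDash feature. A script run before ctest could
then turn those medians into timeouts:

import csv

def timeouts_from_history(csv_path, slack=3.0, floor=30.0):
    """Map each test name to slack * its historical median time,
    never going below a minimum timeout."""
    timeouts = {}
    with open(csv_path, newline="") as f:
        for name, median_seconds in csv.reader(f):
            timeouts[name] = max(floor, slack * float(median_seconds))
    return timeouts

Each value could then be set as that test's
TIMEOUT property before the run starts.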


