[CMake] Building and executing tests when building updated libraries

Robert Dailey rcdailey at gmail.com
Tue Feb 21 14:51:02 EST 2012


On Tue, Feb 21, 2012 at 1:15 PM, David Cole <david.cole at kitware.com> wrote:

> On Tue, Feb 21, 2012 at 1:51 PM, Robert Dailey <rcdailey at gmail.com> wrote:
> > On Tue, Feb 21, 2012 at 12:37 PM, David Cole <david.cole at kitware.com> wrote:
> >>
> >> On Tue, Feb 21, 2012 at 1:27 PM, Robert Dailey <rcdailey at gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I'm using Visual Studio as my generator for my CMake projects. As of
> >>> right now, I make my tests depend on the libraries they test. So for
> >>> example, tests named:
> >>>
> >>> test_thingA
> >>> test_thingB
> >>>
> >>> will all depend on library:
> >>>
> >>> libfoo.lib
> >>>
> >>> When I build target "libfoo" in Visual Studio, it would be nice to have
> >>> all dependent tests build as well, and have them each execute.
> >>>
> >>> The goal for all of this is to make it as convenient as possible for
> >>> developers on my team to RUN TESTS on their code before they submit to
> >>> version control. I want to make it automated, so when they rebuild the
> >>> library, the testing automatically happens. I'd also obviously create an
> >>> option in the CMake cache to turn this automation off should it become
> >>> too annoying.
> >>>
> >>> If this isn't a good idea, can someone recommend a good workflow for
> >>> running tests locally prior to checking in source code?
> >>>
> >>> ---------
> >>> Robert Dailey
> >>>
> >>
> >>
> >>
> >> If you're using add_test in your CMakeLists files, then the perfect way to
> >> prove that the tests all work on a developer's machine is for him or her
> >> to run:
> >>
> >>   ctest -D Experimental
> >>
> >> after making local mods, and before pushing the changes to your source
> >> control system.
> >>
> >> That will configure, build all, and run all the tests, and submit the
> >> results to your CDash server, so there is public evidence that he actually
> >> did run the tests, and hopefully that they all passed on his machine at
> >> least.
> >>
> >> You can also restrict the set of tests that run using -R or -I or -L on
> >> the ctest command line, although you should strive to have your test suite
> >> be brief enough that it's not painful for folks to run the full test suite
> >> prior to checkin.
> >
> >
> > I think this is a reasonable idea for small projects, but in general I
> > disagree with running all tests.
> >
> > There are hundreds of projects (probably 150) and hundreds more tests
> > (probably 10 tests per project). Under typical agile methodology, it only
> > makes sense to unit test the components that have changed; unit testing a
> > dependent component that did not have a source code change is neither
> > needed nor beneficial.
> >
> > All of these tests can take hours to run, which is acceptable because it's
> > a full test suite. Only the build server kicks off a build and runs the
> > FULL test suite (thus running ctest -D Experimental as you have suggested).
> > Developers just do an intermediate check by unit testing only the parts of
> > the code base that have changed. This is essential for practices like
> > continuous integration.
> >
> > Ideally the pipeline goes like this:
> >
> > 1. Programmer makes a change to a certain number of libraries.
> > 2. Programmer runs the relevant tests (or all of them) for each of the
> >    libraries that were changed.
> > 3. Once those tests have passed, the developer submits the source code to
> >    version control.
> > 4. The build server is then instructed to run a full build and test of the
> >    entire code base for each checkin.
> > 5. The build server can then run any integration tests that are configured
> >    (not sure how these would be set up in CMake - probably again as tests,
> >    but not specific to only a single project).
> > 6. The build is considered "complete" at this point.
> >
> > It seems like there would be no choice but to run them individually in
> > this case, since CMake really shines only in the steps after #3.
>
> Incremental testing is something we've talked about over the years,
> but there's no concept of "what's changed, what needs to run" when
> ctest runs at the moment. Communicating that information from the
> build to ctest, or making testing always part of the build, are the two
> approaches we've considered. Nothing exists yet, though; it's all still
> to come.
>
> Sorry to hear you disagree about running all the tests. I'll make one
> more point and then shut up about it: the larger the project, the more
> you need to run all the tests when changes are made. Unless the
> developers all have a very good understanding of what parts need to be
> tested when they make a change, they should run all the tests. If a
> system is very large, then developers are more likely to have an
> imperfect understanding of the system... when that's the case, if
> there is any doubt at all about what dependencies exist, then all the
> tests should be run to verify a change.
>
> Until incremental testing is available, I'd say your best bet is to
> run all the tests.
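
Just to restate your suggestion in concrete terms, that pre-commit workflow
would look something like this on a developer's machine (the -R pattern and
the label below are made-up examples, not names from my actual tree):

  # Full suite, with the results submitted to CDash as evidence the tests ran:
  ctest -D Experimental

  # Or a restricted run while iterating, by test-name regex or by label
  # (the latter only if the tests carry LABELS properties):
  ctest -R "^projectA_"
  ctest -L fileio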


I apologize if I sounded like your suggestion wasn't meaningful or useful.
I would much prefer to do it the way you suggest (running all tests), but
that leaves me with some concerns:


   1. If the developer is running all unit tests on their local machine,
   what is the purpose of then running them on the server? If the server does
   it again in response to the commit, wouldn't that be considered redundant?
   2. Let's assume that running all the tests takes about 1 hour. Not only
   does this slow down productivity, it also makes practices like continuous
   integration effectively impossible, since many people can commit work in
   that 1-hour window, in which case you'd have to update and run the tests
   yet again. It's a recursive issue.

How would you address the concerns I have noted above?

My test targets are named in such a way that they are easy to spot and work
with in the Solution Explorer. For example:

projectA
projectA_test_iostreams
projectA_test_fileio
projectA_test_graphics
projectA_test_input

In my example above, the target named "projectA" has 4 unit tests. Each test
can be responsible for 1 or more translation units (there is no strict rule
here). If I change the way files are loaded by library "projectA", then I
would run the fileio test. In any case, it's easy for a developer to spot the
tests for that project and run all of them if they are unsure.
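
In CMake terms, the shape of this is roughly the following (a simplified
sketch; the source file names and variables are placeholders, not my real
ones):

  add_library(projectA ${PROJECTA_SOURCES})

  add_executable(projectA_test_fileio test/fileio_tests.cpp)
  target_link_libraries(projectA_test_fileio projectA)
  add_test(NAME projectA_test_fileio COMMAND projectA_test_fileio)

  # ...same pattern for the iostreams, graphics and input tests...

With consistent names like these, a developer who only touched the file
loading code can run just that piece:

  ctest -R "^projectA_test_fileio$"

or everything for the project when unsure:

  ctest -R "^projectA_"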

Would you also mind commenting on this structure? It seems to ease the
concern you mentioned about people not always knowing which tests to run.
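
And going back to the automation I asked about at the start of the thread,
this is roughly the direction I had in mind (a sketch only; the option name
is arbitrary, and it assumes the test executables get rebuilt as part of the
normal build whenever the library changes):

  option(AUTORUN_TESTS "Run each test executable right after it is built" ON)

  if(AUTORUN_TESTS)
    # CMake substitutes the built executable's location for the target name,
    # so the test re-runs whenever its target is rebuilt (e.g. because
    # projectA changed).
    add_custom_command(TARGET projectA_test_fileio POST_BUILD
                       COMMAND projectA_test_fileio)
  endif()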

Thanks for your feedback. Keep in mind that I'm not only asking about general
testing principles; I also want to know how best to apply them to the tools
(CMake, CTest). This is where your expertise becomes valuable to me :)

