(0034716)
Sean Patrick Santos
2013-12-04 22:10 (edited on: 2013-12-04 22:15)
My two cents. There are three cases that I'd like to be able to distinguish between at a glance.
1) Everything checks out. (All tests passed.)
2) Something is definitely broken. (At least one test failed.)
3) Things *seem* OK, but not everything was checked. (Mix of passed and skipped.)
If a test isn't run because a dependency is missing, I want to be able to tell easily that there's a gap in test coverage. But a failure suggests that something has been discovered to be broken, which is not always the case.
Say I have 3 tests that depend on optional library A, 15 tests that depend on optional library B, and another 15 tests that depend on, say, being able to connect to a particular server, and another 10 that depend on specific hardware.
If I'm working on a system without B, or with a bad internet connection, I don't want the associated tests to fail, because then I have to sift through failures that are completely meaningless. Nothing is broken and there are no mistakes; those tests just aren't relevant at the moment!
But I also don't want to *silently* remove the tests and end up forgetting that I need to go back and run the whole test suite before distributing a new version, or make a mistake and have the tests removed in a situation where they actually should all be run.
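To make the "silently remove" option concrete, here is roughly what that looks like in CMake today (the OptionalB package and test names are made up, standing in for "library B" above):

  find_package(OptionalB QUIET)
  if(OptionalB_FOUND)
    # The test only exists when the optional dependency was found.
    add_test(NAME uses_b COMMAND test_uses_b)
  endif()
  # When OptionalB is missing, nothing in the CTest summary hints
  # that this test was never registered at all.

That's exactly the failure mode I want to avoid: the suite still reports 100% passed, and the gap in coverage is invisible.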
It seems to me like the right solution is to make "Skipped" a new possible status that's neither a pass nor a fail, as this ticket suggests, so that I could look for those tests when I care, but they wouldn't draw attention when I don't. But looking at the notes on ticket 0008466, apparently that's not a simple thing to do.
Alternatively, you could band-aid the situation by making it possible to treat tests that aren't run as if they passed, but still report their status somewhere easy to see. The CTest output might then look something like this:
    Start 1: my_foo
1/1 Test #1: my_foo ...........................***Skipped   0.00 sec

100% tests passed, 0 tests failed out of 1

Total Test time (real) =   0.01 sec
You could also do the same thing to report expected failures (as opposed to reporting them as normal passed tests, as happens now). Still, counting a skipped test as a pass, just to avoid a spurious message about it failing, is silly.
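(By "expected failures" I mean tests marked with the existing WILL_FAIL property, which CTest currently reports as an ordinary pass when the test fails as expected:

  add_test(NAME known_bug COMMAND test_known_bug)
  set_tests_properties(known_bug PROPERTIES WILL_FAIL TRUE)

A separate "expected failure" count in the summary would be the analogous improvement there.)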
Either way, this requires some new interface (e.g. a "SKIP_TEST" test property?). That would take care of skipping a test at configure (cmake) time. You could also let a test command choose to skip itself at run time, but that relates to whatever happens in ticket 0008466...
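To make the shape of that interface concrete, here's a sketch of how a hypothetical SKIP_TEST property might be used at configure time (the property name is just this ticket's suggestion; it doesn't exist in CMake today):

  # OptionalB_FOUND would come from an earlier find_package(OptionalB QUIET).
  add_test(NAME uses_b COMMAND test_uses_b)
  if(NOT OptionalB_FOUND)
    # Hypothetical property; marks the test as skipped instead of omitting it.
    set_tests_properties(uses_b PROPERTIES SKIP_TEST TRUE)
  endif()

The run-time variant would need some convention for the test command itself to tell CTest "I skipped myself" (a designated return code, say), which is where the overlap with 0008466 comes in.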