View Issue Details
ID:               0013825
Project:          CMake
Category:         CTest
View Status:      public
Date Submitted:   2012-12-31 14:36
Last Update:      2016-06-10 14:31
Reporter:         mwoehlke
Assigned To:      Kitware Robot
Priority:         normal
Severity:         minor
Reproducibility:  always
Status:           closed
Resolution:       moved
Platform / OS / OS Version: (not specified)
Product Version:  CMake 2.8.10
Target Version:   (not specified)
Fixed in Version: (not specified)
Summary: 0013825: ctest shouldn't report skipped tests as failed
Description: When CTest skips some tests, e.g. because their REQUIRED_FILES are missing, the tests contribute to the pass/fail statistics and are reported as failing.

I would rather see output like:

50% tests passed, 2 tests failed out of 4 (17 tests were not run)

...and either only list the tests that actually failed, or list skipped tests in a separate section.

This could be an option, if it isn't suitable as the default behavior (or maybe controlled by a global property in CTestTestfile.cmake).
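
For context, a minimal sketch of how a test ends up in this state (the target, test, and file names are hypothetical):

    # CMakeLists.txt fragment: a test that only makes sense when its input exists.
    add_test(NAME read_dataset COMMAND reader ${CMAKE_CURRENT_SOURCE_DIR}/data/big.dat)
    # If data/big.dat is absent when CTest runs, the test is not executed,
    # yet it still counts against the pass/fail totals described above.
    set_tests_properties(read_dataset PROPERTIES
      REQUIRED_FILES ${CMAKE_CURRENT_SOURCE_DIR}/data/big.dat)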
Tags: No tags attached.
Attached Files

 Relationships
related to    0008466 (closed, Rolf Eike Beer): Provide finer control than pass/fail for a test program
has duplicate 0015431 (closed): Skipped tests (SKIP_RETURN_CODE) cause non-zero exit code

  Notes
(0034716)
Sean Patrick Santos (reporter)
2013-12-04 22:10
edited on: 2013-12-04 22:15

My two cents. There are three cases that I'd like to be able to distinguish between at a glance.

1) Everything checks out. (All tests passed.)
2) Something is definitely broken. (At least one test failed.)
3) Things *seem* OK, but not everything was checked. (Mix of passed and skipped.)

If a test isn't run because a dependency is missing, I want to be able to tell at a glance that there's a gap in test coverage. A failure, on the other hand, suggests that something has been discovered to be broken, which is not always the case.

Say I have 3 tests that depend on optional library A, 15 tests that depend on optional library B, and another 15 tests that depend on, say, being able to connect to a particular server, and another 10 that depend on specific hardware.

If I'm working on a system without B, or with a bad internet connection, I don't want the associated tests to fail, because then I have to sift through those failures that are totally meaningless. Nothing is broken and there are no mistakes; those tests just aren't relevant at the moment!

But I also don't want to *silently* remove the tests and end up forgetting that I need to go back and run the whole test suite before distributing a new version, or make a mistake and have the tests removed in a situation where they actually should all be run.

It seems to me like the right solution is to make "Skipped" a new possible status that's neither a pass nor a fail, as this ticket suggests, so that I could look for those tests when I care, but they wouldn't draw attention when I don't. But looking at the notes on ticket 0008466, apparently that's not a simple thing to do.

Alternatively, you could band-aid the situation by making it possible to treat tests that aren't run as if they passed, but still report the status somewhere where it's easy to see. So CTest output might look like so:

    Start 1: my_foo
1/1 Test #1: my_foo ...........................***Skipped 0.00 sec

100% tests passed, 0 tests failed out of 1

Total Test time (real) = 0.01 sec


You could also do the same thing to report expected failures (instead of reporting them as normal passed tests, as happens now). Still, counting a skipped test as a pass just to avoid a spurious failure message is silly.

Either way, this requires some new interface (e.g. a "SKIP_TEST" test property?). That would take care of skipping a test at cmake time. You could also let a test command choose to skip itself at run time, but that relates to whatever happens in ticket 0008466...
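
A run-time mechanism along these lines did later appear as the SKIP_RETURN_CODE test property (mentioned in the duplicate issue 0015431 above). A minimal sketch, with hypothetical test and executable names:

    # CMakeLists.txt fragment: the test binary decides at run time whether it can run.
    add_test(NAME gui_smoke COMMAND gui_smoke_test)
    # If gui_smoke_test exits with code 125 (e.g. after detecting that DISPLAY
    # is unset), CTest reports the test as skipped rather than failed.
    set_tests_properties(gui_smoke PROPERTIES SKIP_RETURN_CODE 125)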

(0038148)
Anna (reporter)
2015-03-05 11:36

Hi, it would be great to have the option to mark skipped tests as passed.

We have some tests that we know should be skipped when testing in the cloud (for example on Travis CI, due to an unset DISPLAY). But we don't want to get rid of them altogether, because when run on our local computers they test exactly what we want them to test.

Treating skipped tests as failures creates an unnecessary need to dig into the test logs each time, only to see that some tests were skipped.
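
One configure-time workaround for this case, as a minimal sketch (test and executable names hypothetical), is to register the display-dependent tests only when DISPLAY is set:

    # CMakeLists.txt fragment: only add GUI tests when a display is available,
    # and leave a visible trace so the gap in coverage is not silent.
    if(DEFINED ENV{DISPLAY})
      add_test(NAME gui_render COMMAND gui_render_test)
    else()
      message(STATUS "DISPLAY is not set; gui_render will not be registered")
    endif()

The drawback, as noted above, is that the skipped tests then disappear from the CTest summary entirely rather than being reported as skipped.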
(0042189)
Kitware Robot (administrator)
2016-06-10 14:28

Resolving issue as `moved`.

This issue tracker is no longer used. Further discussion of this issue may take place in the current CMake Issues page linked in the banner at the top of this page.

 Issue History
Date Modified     Username             Field / Change
2012-12-31 14:36  mwoehlke             New Issue
2012-12-31 15:18  David Cole           Relationship added: related to 0008466
2013-12-04 22:10  Sean Patrick Santos  Note Added: 0034716
2013-12-04 22:15  Sean Patrick Santos  Note Edited: 0034716
2015-03-05 11:36  Anna                 Note Added: 0038148
2016-02-16 10:32  Brad King            Relationship added: has duplicate 0015431
2016-06-10 14:28  Kitware Robot        Note Added: 0042189
2016-06-10 14:28  Kitware Robot        Status: new => resolved
2016-06-10 14:28  Kitware Robot        Resolution: open => moved
2016-06-10 14:28  Kitware Robot        Assigned To: => Kitware Robot
2016-06-10 14:31  Kitware Robot        Status: resolved => closed

