This document should help you with writing and running tests for your pull requests. It is recommended to add a new test whenever new functionality is added or when code that is not covered by any test is updated. Another recommendation is to add your test in a separate commit.
All the tests reside in the tests directory. It has multiple subdirectories which represent various parts of the OpenSCAP library and its utilities. When you contribute to some part of the OpenSCAP project, you should put a test for your contribution into the corresponding subdirectory of the tests directory. Use your best judgement when deciding where to put the test for your pull request, and if you are not sure, don't be afraid to ask in the pull request; someone will definitely help you with that.
Note: The OpenSCAP project uses the CMake buildsystem, which has built-in testing support through the CTest testing tool.
To run a specific test or all tests you first need to compile the OpenSCAP library and then install additional packages required for testing. See the Building OpenSCAP on Linux section in the OpenSCAP Developer Manual for more details.
In this guide we will use an example to describe the process of writing a test
for the OpenSCAP project. Let’s suppose you want to write a new test for
the Script Check Engine (SCE) to test its basic functionality. SCE allows you
to define your own scripts (usually written in Bash or Python) to extend XCCDF
rule checking capabilities. Custom check scripts can be referenced from
an XCCDF rule using the <check-content-ref> element, for example:
<check system="http://open-scap.org/page/SCE">
    <check-content-ref href="YOUR_BASH_SCRIPT.sh"/>
</check>
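For illustration, a minimal SCE check script could look like the sketch below. SCE exports the XCCDF result codes (such as $XCCDF_RESULT_PASS and $XCCDF_RESULT_FAIL) into the script's environment, and the script reports its result through its exit status; the file and pattern checked here are only examples:

#!/usr/bin/env bash
# Illustrative SCE check: pass if root login over SSH is disabled.
if grep -q '^PermitRootLogin no' /etc/ssh/sshd_config; then
    exit "$XCCDF_RESULT_PASS"
else
    exit "$XCCDF_RESULT_FAIL"
fi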
In our example, we are testing the SCE module, therefore we will look for its subdirectory in the tests directory and find it at tests/sce. We will add our new test into this subdirectory.
This will be the case most of the time. As stated above, in our example we will place our test into the tests/sce directory.
This might happen if your test covers a part of OpenSCAP which has no tests at all. In this case you would need to add a new directory with a suitable name into the tests directory structure and create/update the CMakeLists.txt files.
To have an example also for this scenario, let's suppose we want to add the foo subdirectory into the tests directory. We need to:
- Create the foo subdirectory in the tests directory.
- Use the add_subdirectory command from the tests/CMakeLists.txt to add the foo subdirectory to the CMake buildsystem.
- Create CMakeLists.txt inside the tests/foo directory and use the add_oscap_test function (defined in the tests/CMakeLists.txt) to add all your test scripts from the foo directory (see the sketch below).
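For illustration, the two CMakeLists.txt changes could look like this; the foo directory and the test_foo.sh script are hypothetical names:

# in tests/CMakeLists.txt: register the new subdirectory
add_subdirectory("foo")

# in the new tests/foo/CMakeLists.txt: register each test script
add_oscap_test("test_foo.sh")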
Please see the CTest documentation for more details.
When writing tests for OpenSCAP you should use the common test library which is located at tests/test_common.sh.in.
Note: The cmake command will generate the tests/test_common.sh file from tests/test_common.sh.in, adding configuration-specific settings.
You will need to source tests/test_common.sh in your test scripts to use the functions it provides:
#!/usr/bin/env bash
set -o pipefail
. $builddir/tests/test_common.sh
test1
test2
...
Note: The $builddir variable contains the path to the top level of the build tree. It is defined in the tests/CMakeLists.txt as builddir=${CMAKE_BINARY_DIR}.
Always use the $OSCAP variable instead of plain oscap in your tests when calling the oscap command line tool. This is because tests might be run with the CUSTOM_OSCAP variable set, pointing at a different oscap binary.
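For example, evaluating an XCCDF document in a test could look like this (reusing the example content file from this guide):

$OSCAP xccdf eval --results results.xml $srcdir/sce_xccdf.xml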
You can use $XMLDIFF in your tests, which will call the tests/xmldiff.pl script. It is also possible to do XPath queries using the $XPATH variable; the usage is:
$XPATH FILE 'QUERY'
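For instance, counting the rule results in a generated report might look like this (the query string is illustrative):

$XPATH results.xml 'count(//rule-result)'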
It is always good to set the pipefail option in all your test scripts so you don't accidentally lose non-zero exit statuses of commands in a pipeline:
set -o pipefail
The next option you can consider is errexit, which exits your script immediately if a command exits with a non-zero status. You might want to use this option if the tests depend on each other and it doesn't make sense to continue testing after one of them fails.
set -e
Also consider splitting the tests into several separate bash scripts if you test many things at once; a long-running "all.sh" test should be avoided if possible, because when it fails it is harder to debug than several smaller tests. However, make sure these independent tests don't have hidden dependencies: the test suite is often run in parallel, so in-order execution of different test scripts cannot be ensured. (The order inside a single test script is, of course, guaranteed by bash's usual rules.)
Lastly, make sure your tests are architecture, distribution, and operating system independent. Try to make your tests dependent on local file contents, not system file contents, and make them independent of listening ports and installed software as much as possible.
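For example, instead of grepping a system configuration file whose contents differ between distributions, a portable test can create its own input; a minimal sketch:

TMP_FILE=$(mktemp)
echo "PermitRootLogin no" > "$TMP_FILE"
grep -q '^PermitRootLogin no' "$TMP_FILE"   # check local content, not /etc
rm -f "$TMP_FILE"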
The test_init function does nothing. Logging is done by CTest, and the log from testing can be found at build/Testing/Temporary/LastTest.log.
The test_run function is responsible for executing a test script file or a function and logging its result into the log file.
test_run "DESCRIPTION" TEST_FUNCTION|$srcdir/TEST_SCRIPT_FILE ARG [ARG...]
Note: The $srcdir variable contains the path to the directory with the test script. It is defined in the tests/CMakeLists.txt as srcdir=${CMAKE_CURRENT_SOURCE_DIR}. The reason is backward compatibility, as originally the tests were executed by GNU automake.
The test_run function reports the following results into the log file:
- PASS when the script/function returns 0,
- FAIL when the script/function returns 1,
- SKIP when the script/function returns 255,
- WARN when the script/function returns none of the above exit statuses.
The result of every test executed by the test_run function will be reported in the log file in the following way:
TEST: DESCRIPTION
<test stdout + stderr output>
RESULT: PASS/FAIL/SKIP/WARN
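Both invocation styles might look like this in practice (the function name and the test_sce_verbose.sh script are illustrative):

function check_sce_eval {
    $OSCAP xccdf eval $srcdir/sce_xccdf.xml
}
test_run "SCE evaluation succeeds" check_sce_eval
test_run "SCE evaluation, verbose mode" $srcdir/test_sce_verbose.sh --verbose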
The test_exit function is responsible for cleaning up the testing environment. You can call it without arguments or with one argument, a script or function which will perform additional clean-up tasks.
test_exit [CLEAN_SCRIPT|CLEAN_FUNCTION]
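A typical use at the end of a test script, with a hypothetical clean-up function, could be:

function cleanup {
    rm -f results.xml   # remove files generated during the test
}
test_exit cleanup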
The require function checks that the required programs are available in $PATH; use it as follows:
require 'program' || return 255
The verify_results function verifies that there are COUNT results of the selected OVAL TYPE in a RESULTS_FILE:
verify_results TYPE CONTENT_FILE RESULTS_FILE COUNT
Note: This function expects that the OVAL TYPE results are numbered from 1 to COUNT in the RESULTS_FILE.
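For instance, a check for three OVAL definition results might look like this (assuming "def" is the TYPE used for definitions; the file names are illustrative):

verify_results "def" $srcdir/oval_content.xml results.xml 3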
The assert_exists function performs an XPath query on the file specified in the $result variable and checks that the number of results matches the expected number given as an argument:
result="relative_path_to_file"
assert_exists EXPECTED_NUMBER_OF_RESULTS XPATH_QUERY_STRING
For example, let’s say you want to check that in the results.xml
file the
result of the rule xccdf_com.example.www_rule_test
is fail:
result="./results.xml"
my_rule="xccdf_com.example.www_rule_test"
assert_exists 1 "//rule-result[@idref=\"$my_rule\"]/result[text()=\"fail\"]"
Now that we know where a new test should go and what functions and capabilities the common test library provides, we can add the test files containing the test scripts and the content required for testing.
To sum up, we are adding a test to check the basic functionality of the Script Check Engine (SCE), and we have decided that the test will go into the tests/sce directory.
We will add the tests/sce/test_sce.sh script, which will contain our test, and tests/sce/sce_xccdf.xml, an XML file with XCCDF rules referencing various check scripts (grep for the check-content-ref element to see the referenced files). All the referenced check scripts are set to always pass, and the tests/sce/test_sce.sh script will evaluate the tests/sce/sce_xccdf.xml XCCDF document and check that all rule results are pass.
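Putting the pieces together, tests/sce/test_sce.sh could be sketched roughly as follows; the evaluation options and the XPath query are illustrative, not the actual content of the test:

#!/usr/bin/env bash
set -o pipefail
. $builddir/tests/test_common.sh

result="./results.xml"

function test_sce_all_pass {
    $OSCAP xccdf eval --results "$result" $srcdir/sce_xccdf.xml
    # no rule in sce_xccdf.xml is expected to fail
    assert_exists 0 '//rule-result/result[text()="fail"]'
}

test_run "SCE: all rules pass" test_sce_all_pass
test_exit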
You need to plug your test into the test suite so it will be run automatically every time make test is run. To do this, you need to add your test script to the CMakeLists.txt located in the same directory as your test script.
We will demonstrate this on our example with the SCE test. We have prepared our test script, the XML document with custom rules, and various check scripts for testing. We placed all our test files into the tests/sce directory. Now we will modify tests/sce/CMakeLists.txt and add our test script file using the add_oscap_test function, which will make sure that our test is executed by make test:
if(ENABLE_SCE)
    ...
    add_oscap_test("test_sce.sh")
    ...
endif()
To run your new test you first need to compile the OpenSCAP library. See the
Building OpenSCAP on Linux section in the OpenSCAP Developer Manual
for more details.
You don't need to run all the tests using make test; you can also run only specific test(s). To do so, change into the build directory and run ctest -R from there, for example:
$ cd build/
$ ctest -R sce/test_sce.sh
$ less Testing/Temporary/LastTest.log
Results from testing are printed to stdout, and a detailed log file with your test results can be found in the Testing/Temporary/LastTest.log file.
The MITRE tests (in tests/mitre
) are functionality tests for several
key probes.
There are several reasons for putting these tests behind an ENABLE_MITRE CMake flag: they cannot be run in parallel because they race to create and delete temporary support files; they require port 25 to be open and listening (mitre/test_mitre_linux_probes.sh); there are outdated assumptions which need to be gated behind operating system version checks; and a number of probes had to be disabled because they are not supported (e.g., SQL and LDAP checks).
To run only the MITRE tests, please make sure you've installed and started an SMTP server listening on 127.0.0.1:25, and that you're running on an RPM-based distribution. Then:
$ cd build/
$ cmake -DENABLE_MITRE=TRUE ..
$ make
$ ctest --output-on-failure -R mitre
To run the containerized MITRE tests (the container installs an SMTP server and tests it):
$ cd openscap/
$ docker build --tag openscap_mitre_tests:latest -f Dockerfiles/mitre_tests .
$ docker run openscap_mitre_tests:latest