Client Options¶
Running tests¶
Running tests can be done in the following ways:
Using the rotest command:
$ rotest [PATHS]... [OPTIONS]
The command accepts any number of paths, either files or directories. Each directory is searched recursively for more test files. If no path is given, the current working directory is used by default.
Calling the rotest.main() function:
from rotest import main
from rotest.core import TestCase


class Case(TestCase):
    def test(self):
        pass


if __name__ == "__main__":
    main()
Then, the same file can be run:
$ python test_file.py [OPTIONS]
Getting Help¶
-h, --help¶
Show a help message and exit.
If you’re not sure what you can do, the -h and --help options are here to help:
$ rotest -h
Run tests in a module or directory.
Usage:
rotest [<path>...] [options]
Options:
-h, --help
Show help message and exit.
--version
Print version information and exit.
-c <path>, --config <path>
Test configuration file path.
-s, --save-state
Enable saving state of resources.
-d <delta-iterations>, --delta <delta-iterations>
Enable run of failed tests only - enter the number of times the
failed tests should be run.
-p <processes>, --processes <processes>
Use multiprocess test runner - specify number of worker
processes to be created.
-o <outputs>, --outputs <outputs>
Output handlers separated by comma.
-f <query>, --filter <query>
Run only tests that match the filter expression,
e.g. 'Tag1* and not Tag13'.
-n <name>, --name <name>
Assign a name for current launch.
-l, --list
Print the tests hierarchy and quit.
-F, --failfast
Stop the run on first failure.
-D, --debug
Enter ipdb debug mode upon any test exception.
-S, --skip-init
Skip initialization and validation of resources.
-r <query>, --resources <query>
Specify resources to request by attributes,
e.g. '-r res1.group=QA,res2.comment=CI'.
Listing and Filtering¶
-l, --list¶
Print the tests hierarchy and quit.
-f <query>, --filter <query>¶
Run only tests that match the filter expression, e.g. "Tag1* and not Tag13".
Next, you can print a list of all the tests that will be run, using the -l or --list option:
$ rotest some_test_file.py -l
CalculatorSuite []
| CasesSuite []
| | PassingCase.test_passing ['BASIC']
| | FailingCase.test_failing ['BASIC']
| | ErrorCase.test_error ['BASIC']
| | SkippedCase.test_skip ['BASIC']
| | SkippedByFilterCase.test_skipped_by_filter ['BASIC']
| | ExpectedFailureCase.test_expected_failure ['BASIC']
| | UnexpectedSuccessCase.test_unexpected_success ['BASIC']
| PassingSuite []
| | PassingCase.test_passing ['BASIC']
| | SuccessFlow ['FLOW']
| | | PassingBlock.test_method
| | | PassingBlock.test_method
| FlowsSuite []
| | FailsAtSetupFlow ['FLOW']
| | | PassingBlock.test_method
| | | FailingBlock.test_method
| | | ErrorBlock.test_method
| | FailsAtTearDownFlow ['FLOW']
| | | PassingBlock.test_method
| | | TooManyLogLinesBlock.test_method
| | | FailingBlock.test_method
| | | ErrorBlock.test_method
| | SuccessFlow ['FLOW']
| | | PassingBlock.test_method
| | | PassingBlock.test_method
You can see the tests hierarchy, as well as the tags each test has. Speaking of tags, you can apply filters to the tests to be run, or to the displayed list, using the -f or --filter option:
$ rotest some_test_file.py -f FLOW -l
CalculatorSuite []
| CasesSuite []
| | PassingCase.test_passing ['BASIC']
| | FailingCase.test_failing ['BASIC']
| | ErrorCase.test_error ['BASIC']
| | SkippedCase.test_skip ['BASIC']
| | SkippedByFilterCase.test_skipped_by_filter ['BASIC']
| | ExpectedFailureCase.test_expected_failure ['BASIC']
| | UnexpectedSuccessCase.test_unexpected_success ['BASIC']
| PassingSuite []
| | PassingCase.test_passing ['BASIC']
| | SuccessFlow ['FLOW']
| | | PassingBlock.test_method
| | | PassingBlock.test_method
| FlowsSuite []
| | FailsAtSetupFlow ['FLOW']
| | | PassingBlock.test_method
| | | FailingBlock.test_method
| | | ErrorBlock.test_method
| | FailsAtTearDownFlow ['FLOW']
| | | PassingBlock.test_method
| | | TooManyLogLinesBlock.test_method
| | | FailingBlock.test_method
| | | ErrorBlock.test_method
| | SuccessFlow ['FLOW']
| | | PassingBlock.test_method
| | | PassingBlock.test_method
The output will be colored similarly to the output above.
You can include the boolean operators not, or and and in your filter, as well as test names and wildcards (all non-operator terms are case insensitive):
$ rotest some_test_file.py -f "basic and not skipped*" -l
CalculatorSuite []
| CasesSuite []
| | PassingCase.test_passing ['BASIC']
| | FailingCase.test_failing ['BASIC']
| | ErrorCase.test_error ['BASIC']
| | SkippedCase.test_skip ['BASIC']
| | SkippedByFilterCase.test_skipped_by_filter ['BASIC']
| | ExpectedFailureCase.test_expected_failure ['BASIC']
| | UnexpectedSuccessCase.test_unexpected_success ['BASIC']
| PassingSuite []
| | PassingCase.test_passing ['BASIC']
| | SuccessFlow ['FLOW']
| | | PassingBlock.test_method
| | | PassingBlock.test_method
| FlowsSuite []
| | FailsAtSetupFlow ['FLOW']
| | | PassingBlock.test_method
| | | FailingBlock.test_method
| | | ErrorBlock.test_method
| | FailsAtTearDownFlow ['FLOW']
| | | PassingBlock.test_method
| | | TooManyLogLinesBlock.test_method
| | | FailingBlock.test_method
| | | ErrorBlock.test_method
| | SuccessFlow ['FLOW']
| | | PassingBlock.test_method
| | | PassingBlock.test_method
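To build intuition for how such a filter expression could be evaluated, here is a minimal sketch (an illustration only, not rotest's actual implementation) that matches case-insensitive wildcard terms against a test's name and tags using Python's standard fnmatch module:

```python
from fnmatch import fnmatch


def term_matches(term, names):
    """Case-insensitively match one wildcard term against a test's name and tags."""
    return any(fnmatch(name.lower(), term.lower()) for name in names)


# A test's identifiers: its qualified name plus its tags.
names = ["PassingCase.test_passing", "BASIC"]

# Hand-evaluating the filter "basic and not skipped*" for this test:
result = term_matches("basic", names) and not term_matches("skipped*", names)
print(result)  # True: the test is tagged BASIC and nothing matches skipped*
```

The filter's and/or/not then map directly onto Python's boolean operators, with each remaining term matched as a wildcard.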
Stopping at first failure¶
-F, --failfast¶
Stop the run on first failure.
The -F or --failfast option stops the execution after the first failure:
$ rotest some_test_file.py --failfast
CalculatorSuite
CasesSuite
PassingCase.test_passing ... OK
FailingCase.test_failing ... FAIL
Traceback (most recent call last):
File "/home/odp/code/rotest/src/rotest/core/case.py", line 310, in test_method_wrapper
test_method(*args, **kwargs)
File "tests/calculator_tests.py", line 34, in test_failing
self.assertEqual(1, 2)
AssertionError: 1 != 2
======================================================================
FAIL: FailingCase.test_failing
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/odp/code/rotest/src/rotest/core/case.py", line 310, in test_method_wrapper
test_method(*args, **kwargs)
File "tests/calculator_tests.py", line 34, in test_failing
self.assertEqual(1, 2)
AssertionError: 1 != 2
Ran 2 tests in 0.205s
FAILED (failures=1)
Debug Mode¶
-D, --debug¶
Enter ipdb debug mode upon any test exception.
The -D or --debug option enters debug mode when an exception is raised at the top level of the code:
$ rotest some_test_file.py --debug
AnonymousSuite
FailingCase.test ...
Traceback (most recent call last):
File "tests/some_test_file.py", line 11, in test
self.assertEqual(self.calculator.calculate("1+1"), 3)
File "/usr/lib64/python2.7/unittest/case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
File "/usr/lib64/python2.7/unittest/case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
AssertionError: 2.0 != 3
> tests/some_test_file.py(12)test()
10 def test(self):
11 self.assertEqual(self.calculator.calculate("1+1"), 3)
---> 12
13
14 if __name__ == "__main__":
ipdb> help
Documented commands (type help <topic>):
========================================
EOF c d help longlist pinfo raise tbreak whatis
a cl debug ignore n pinfo2 restart u where
alias clear disable j next pp retry unalias
args commands down jump p psource return unt
b condition enable l pdef q run until
break cont exit list pdoc quit s up
bt continue h ll pfile r step w
Once in the debugging session, you can do any of the following:
- Inspect the situation, by evaluating expressions or using commands supported by ipdb, for example continuing the flow, jumping to a specific line, etc.
- retry the action, if it's a known flaky action that someone is going to take care of soon.
- raise the exception, failing the test.
Retrying Tests¶
-d <delta-iterations>, --delta <delta-iterations>¶
Rerun a test the specified number of times until it passes.
If you have flaky tests, you can automatically rerun a test until it passes, using the --delta or -d option:
$ rotest some_test_file.py --delta 2
AnonymousSuite
FailingCase.test ... FAIL
Traceback (most recent call last):
File "rotest/src/rotest/core/case.py", line 310, in test_method_wrapper
test_method(*args, **kwargs)
File "some_test_file.py", line 11, in test
self.assertEqual(self.calculator.calculate("1+1"), 3)
AssertionError: 2.0 != 3
======================================================================
FAIL: FailingCase.test
----------------------------------------------------------------------
Traceback (most recent call last):
File "rotest/src/rotest/core/case.py", line 310, in test_method_wrapper
test_method(*args, **kwargs)
File "some_test_file.py", line 11, in test
self.assertEqual(self.calculator.calculate("1+1"), 3)
AssertionError: 2.0 != 3
Ran 1 test in 0.122s
FAILED (failures=1)
AnonymousSuite
FailingCase.test ... OK
Ran 1 test in 0.082s
OK
Running Tests in Parallel¶
-p <processes>, --processes <processes>¶
Spawn the specified number of worker processes to execute tests.
To reduce the running time of your tests, use the -p or --processes option to run several worker processes that execute tests in parallel.
Every test has a TIMEOUT attribute (defaulting to 30 minutes), which is enforced only when at least one worker process is spawned:
class SomeTest(TestCase):
    # The test will be stopped if it exceeds an hour of execution time,
    # but only when the number of spawned processes is at least 1
    TIMEOUT = 60 * 60

    def test(self):
        pass
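Why does the timeout only apply with worker processes? A time limit can only be enforced from outside the running test. The following sketch illustrates the idea (using a thread for simplicity; rotest itself uses worker processes, which, unlike threads, can be forcibly terminated):

```python
import concurrent.futures
import time


def slow_test():
    time.sleep(0.5)  # stands in for a test body that exceeds its time limit


# The runner waits on the worker from outside and decides when time is up.
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_test)
    try:
        future.result(timeout=0.1)  # wait up to the allowed TIMEOUT
        timed_out = False
    except concurrent.futures.TimeoutError:
        timed_out = True

print(timed_out)  # True: the test exceeded its 0.1s limit
```

Without a separate worker, the runner and the test share one thread of control, so there is nobody left to notice that the limit was exceeded.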
Specifying Resources to Use¶
-r <query>, --resources <query>¶
Choose resources based on the given query.
You can run tests with specific resources, using the --resources or -r option. The request is of the form:
$ rotest some_test_file.py --resources <query-for-resource-1>,<query-for-resource-2>,...
As an example, let’s suppose we have the following test:
class SomeTest(TestCase):
    res1 = Resource1()
    res2 = Resource2()

    def test(self):
        ...
You can request resources by their names:
$ rotest some_test_file.py --resources res1=name1,res2=name2
Alternatively, you can make more complex queries:
$ rotest some_test_file.py --resources res1.group.name=QA,res2.comment=nightly
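To see what a dotted query such as res1.group.name=QA means, here is a small sketch of resolving a dotted attribute path against a resource object. The Group and Resource1 classes below are hypothetical stand-ins for illustration; this is not rotest's implementation:

```python
from functools import reduce


def resolve(obj, dotted_path):
    """Follow a dotted attribute path such as 'group.name' on a resource."""
    return reduce(getattr, dotted_path.split("."), obj)


# Hypothetical stand-in resource objects, only for illustration:
class Group:
    name = "QA"


class Resource1:
    group = Group()


print(resolve(Resource1(), "group.name"))  # QA
```

Each segment of the query walks one attribute deeper, and the final value is compared against the requested one.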
Activating Output Handlers¶
-o <outputs>, --outputs <outputs>¶
Output handlers separated by comma.
To activate output handlers, use the -o or --outputs option, with the handlers separated by commas:
$ rotest some_test_file.py --outputs excel,logdebug
For more about output handlers, see Output Handlers.