Test Monitors

Purpose

Monitors are custom output handlers that add runtime validation to tests or save extra information about them.

Monitors have the following features:

  • They can be applied or dropped as easily as adding or removing any other output handler from the list (see the sketch below).
  • They enable extending sets of tests with additional validations without altering their code.
  • They can run in the background (in another thread).

A classic example is monitoring CPU usage during tests, or a resource’s log file.
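
Because monitors are just output handlers, applying one is a matter of registering it and naming it in the run's outputs. A minimal sketch, assuming a monitor class CpuMonitor (like the one sketched in the next section) living in an illustrative my_monitors package, registered through rotest's result-handler entry point:

# setup.py of the package that defines the monitor (names are illustrative).
from setuptools import setup

setup(
    name="my-rotest-monitors",
    packages=["my_monitors"],
    entry_points={
        # The key is the name used to enable the handler in a run's outputs.
        "rotest.result_handlers": [
            "cpu_monitor = my_monitors.cpu:CpuMonitor",
        ],
    },
)

Once installed, the monitor is applied or dropped per run by adding or removing its name from the outputs list, e.g. via the -o/--outputs command-line option.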

Writing A Monitor

There are two monitor classes which you can inherit from:

class rotest.core.result.monitor.monitor.AbstractMonitor(*args, **kwargs)

Abstract monitor class.

CYCLE

number – sleep time in seconds between monitor runs.

SINGLE_FAILURE

bool – whether or not to continue running the monitor after it has failed.

Note

When running in multiprocess mode, regular output handlers are used by the main process, while monitors are run by each worker, since they use the tests’ attributes (resources, for example) that aren’t available in the main process.

fail_test(test, message)

Add a monitor failure to the test without stopping it.

Parameters:
  • test (object) – test item instance.
  • message (str) – failure message.

run_monitor(test)

The monitor’s main procedure.

class rotest.core.result.monitor.monitor.AbstractResourceMonitor(*args, **kwargs)

Abstract cyclic monitor that depends on a resource to run.

This class extends the AbstractMonitor behavior and also waits for the resource to be ready for work before calling run_monitor.

RESOURCE_NAME

str – expected field name of the resource in the test.
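
For instance, a cyclic monitor that watches the CPU usage of a resource could look like the following sketch; the resource field name server and its get_cpu_usage method are illustrative assumptions:

from rotest.core.result.monitor.monitor import AbstractResourceMonitor


class CpuMonitor(AbstractResourceMonitor):
    """Fail the test if its resource reaches 100% CPU usage."""
    CYCLE = 5  # run_monitor will be called every 5 seconds
    RESOURCE_NAME = "server"  # wait for test.server to be ready before monitoring

    def run_monitor(self, test):
        # Called periodically between setUp and tearDown,
        # once test.server is ready for work.
        if test.server.get_cpu_usage() >= 100:  # illustrative resource method
            self.fail_test(test, "Reached 100% CPU usage")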

There are two types of monitors:

  • Monitors that only react to test events, e.g. taking a screen-shot on error.

    Since monitors inherit from AbstractResultHandler, you can react to any test event by overriding the appropriate method.

    See Available Events for a list of events.

    Each of these event methods receives the test instance as its first parameter, through which you can access its fields (test.<resource>, test.config, test.work_dir, etc.); see the first sketch after this list.

  • Monitors that run in the background and periodically save data or run a validation, like the CPU usage monitor sketched above.

    To create such a monitor, simply override the class field CYCLE and the method run_monitor.

    Again, the run_monitor method (which is called periodically after setUp and until tearDown) receives the test instance as a parameter, through which you can access whatever you need.

    Note that the monitor thread is created only for upper tests, i.e. TestCases or topmost TestFlows.

    Remember that you might need to use some synchronization mechanism since you’re running in a different thread yet using the test’s own resources.
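
A monitor of the first kind only overrides event methods. A sketch that captures a screen-shot on error, assuming the add_error event receives the test and the formatted exception (see Available Events); the device field and its save_screenshot method are illustrative assumptions:

from rotest.core.result.monitor.monitor import AbstractMonitor


class ScreenshotMonitor(AbstractMonitor):
    """Save a screen-shot into the test's working directory on error."""
    def add_error(self, test, exception_string):
        # Reacts to the 'error' test event; no background thread is involved.
        test.device.save_screenshot(test.work_dir)  # illustrative call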

Use the method fail_test to add monitor failures to your tests in the background, e.g.

self.fail_test(test, "Reached 100% CPU usage")

Note that when using TestBlocks and TestFlows, you might want to limit your monitor events so that they apply only to main tests and not to sub-components (run_monitor already behaves that way by default). For your convenience, you can use the following decorators on the overridden event methods to limit their activity:

rotest.core.result.monitor.monitor.skip_if_case(func)

Avoid running the decorated method if the test is a TestCase.

rotest.core.result.monitor.monitor.skip_if_flow(func)

Avoid running the decorated method if the test is a TestFlow.

rotest.core.result.monitor.monitor.skip_if_block(func)

Avoid running the decorated method if the test is a TestBlock.

rotest.core.result.monitor.monitor.skip_if_not_main(func)

Avoid running the decorated method if the test is a TestBlock or a sub-flow.

rotest.core.result.monitor.monitor.require_attr(resource_type)

Avoid running the decorated method if the test lacks an attribute.

Parameters:
  • resource_type (str) – name of the attribute to search for.
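
For example, combining these decorators with the screen-shot monitor sketched above restricts the event to main tests that actually hold the needed attribute (device remains an illustrative name):

from rotest.core.result.monitor.monitor import (AbstractMonitor,
                                                require_attr,
                                                skip_if_not_main)


class MainScreenshotMonitor(AbstractMonitor):
    """Capture a screen-shot on error, but only for main tests."""
    @require_attr("device")  # skip tests that lack a 'device' attribute
    @skip_if_not_main  # skip TestBlocks and sub-flows
    def add_error(self, test, exception_string):
        test.device.save_screenshot(test.work_dir)  # illustrative call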