The performance test interface leverages the script, function, and class-based unit testing interfaces. You can perform qualifications within your performance tests to ensure correct functional behavior while measuring code performance. Also, you can run your performance tests as standard regression tests to ensure that code changes do not break performance tests.
This table indicates what code is measured for the different types of tests.
| Type of Test | What Is Measured |
|---|---|
| Script-based | Code in each section of the script |
| Function-based | Code in each test function |
| Class-based | Code in each method tagged with the Test attribute |
| Class-based deriving from matlab.perftest.TestCase and using startMeasuring and stopMeasuring methods | Code between calls to startMeasuring and stopMeasuring in each method tagged with the Test attribute |
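For example, a minimal class-based performance test might look like the following sketch (the class name and workload are illustrative):

```matlab
% Hypothetical class-based performance test. Because it derives from
% matlab.unittest.TestCase, the framework times each method tagged with
% the Test attribute in full, including the qualification call.
classdef PreallocationTest < matlab.unittest.TestCase
    methods (Test)
        function testOnes(testCase)
            x = ones(1, 1e6);
            testCase.verifyEqual(numel(x), 1e6)
        end
    end
end
```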
You can create two types of time experiments.

A frequentist time experiment collects a variable number of measurements to achieve a specified margin of error and confidence level. Use a frequentist time experiment to define statistical objectives for your measurement samples. Generate this experiment using the runperf function or the limitingSamplingError static method of the TimeExperiment class.

A fixed time experiment collects a fixed number of measurements. Use a fixed time experiment to measure first-time costs of your code or to take explicit control of your sample size. Generate this experiment using the withFixedSampleSize static method of the TimeExperiment class.
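As a sketch, assuming a performance test file named PreallocationTest exists on the path, the two experiment types can be constructed and run like this:

```matlab
suite = testsuite('PreallocationTest');  % hypothetical test file

% Frequentist experiment: samples until the default statistical
% objectives (5% relative margin of error, 95% confidence) are met
freqExperiment = matlab.perftest.TimeExperiment.limitingSamplingError;
freqResults = run(freqExperiment, suite);

% Fixed experiment: exactly 6 samples, preceded by 1 warm-up measurement
fixedExperiment = matlab.perftest.TimeExperiment.withFixedSampleSize( ...
    6, 'NumWarmups', 1);
fixedResults = run(fixedExperiment, suite);
```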
This table summarizes the differences between the frequentist and fixed time experiments.
| | Frequentist time experiment | Fixed time experiment |
|---|---|---|
Warm-up measurements | 4 by default, but configurable through TimeExperiment.limitingSamplingError | 0 by default, but configurable through TimeExperiment.withFixedSampleSize |
Number of samples | Between 4 and 32 by default, but configurable through TimeExperiment.limitingSamplingError | Defined during experiment construction |
Relative margin of error | 5% by default, but configurable through TimeExperiment.limitingSamplingError | Not applicable |
Confidence level | 95% by default, but configurable through TimeExperiment.limitingSamplingError | Not applicable |
Framework behavior for invalid test result | Stops measuring a test and moves to the next one | Collects specified number of samples |
If your class-based tests derive from matlab.perftest.TestCase instead of matlab.unittest.TestCase, then you can use the startMeasuring and stopMeasuring methods to define boundaries for performance test measurements. You can use these boundaries only once within each method that has the Test attribute. If you use these methods, the call to startMeasuring must precede the call to stopMeasuring. If you use these methods incorrectly in a Test method and run the test as a TimeExperiment, then the framework marks the measurement as invalid. You can still run these performance tests as unit tests. For more information, see Test Performance Using Classes.
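A sketch of a class that derives from matlab.perftest.TestCase and restricts the measurement to a region of interest (the class name, file handling, and data size are illustrative):

```matlab
classdef FileWriteTest < matlab.perftest.TestCase
    methods (Test)
        function testPrintf(testCase)
            file = tempname;                 % setup: not measured
            fid = fopen(file, 'w');
            testCase.assertNotEqual(fid, -1);

            testCase.startMeasuring();
            fprintf(fid, '%d\n', 1:100000);  % only this region is timed
            testCase.stopMeasuring();

            testCase.verifyEqual(fclose(fid), 0);  % teardown: not measured
        end
    end
end
```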
There are two ways to run performance tests:

- Use the runperf function to run the tests. This function uses a variable number of measurements to reach a sample mean with a 0.05 relative margin of error within a 0.95 confidence level. It runs the tests four times to warm up the code and between 4 and 32 times to collect measurements that meet the statistical objectives.
- Generate an explicit test suite using the testsuite function or the methods in the TestSuite class, and then create and run a time experiment:
  - Use the withFixedSampleSize static method of the TimeExperiment class to construct a time experiment with a fixed number of measurements. You can specify a fixed number of warm-up measurements and a fixed number of samples.
  - Use the limitingSamplingError static method of the TimeExperiment class to construct a time experiment with specified statistical objectives, such as margin of error and confidence level. You can also specify the number of warm-up measurements and the minimum and maximum number of samples.
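For instance, assuming the hypothetical PreallocationTest file from above, runperf returns an array of MeasurementResult objects whose Samples tables you can aggregate:

```matlab
results = runperf('PreallocationTest');   % hypothetical test file

% Each result holds a table of samples; concatenate them for analysis
allSamples = vertcat(results.Samples);
fprintf('Mean measured time: %.4f s\n', mean(allSamples.MeasuredTime));
```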
You can run your performance tests as regression tests. For more information, see Run Tests for Various Workflows.
In some situations, the MeasurementResult for a test is marked invalid. This happens when the performance testing framework sets the Valid property of the MeasurementResult to false, which occurs if your test fails or is filtered. Also, if your test incorrectly uses the startMeasuring and stopMeasuring methods of matlab.perftest.TestCase, then the MeasurementResult for that test is marked invalid.
When the performance testing framework encounters an invalid test result, it behaves differently depending on the type of time experiment:
If you create a frequentist time experiment, then the framework stops measuring for that test and moves to the next test.
If you create a fixed time experiment, then the framework continues collecting the specified number of samples.
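As a sketch, you can filter out invalid measurements before analyzing the results of a fixed time experiment (the test file name is hypothetical):

```matlab
suite = testsuite('PreallocationTest');   % hypothetical test file
experiment = matlab.perftest.TimeExperiment.withFixedSampleSize(4);
results = run(experiment, suite);

% Keep only results whose measurements the framework considers valid
validResults = results([results.Valid]);
```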
See Also: matlab.perftest.TimeExperiment | matlab.unittest.measurement.MeasurementResult | runperf | testsuite