Getting started with test-r
test-r is a testing framework for Rust that is almost a drop-in replacement for the built-in tests, while enabling several advanced features such as dependency injection, dynamic test generation, custom tags, inline customization of the test execution, and more.
By replicating the built-in test framework's command line interface, test-r tests work seamlessly with IDEs like Visual Studio Code, IntelliJ IDEA, Zed, and others. test-r also implements many unstable features of the built-in test framework, such as customizable test output, reporting and ensuring execution time, shuffling test execution, and running #[bench] benchmarks.
To start using test-r, add it to the dev-dependencies section of your Cargo.toml:
[dev-dependencies]
test-r = "1"
There are three additional steps to take when using test-r in place of the built-in tests:
- Disabling the built-in test harness for every build target where test-r will be used
- Enabling the test-r test harness by including its main function in every build target
- Importing test-r's custom test attribute where #[test] is used

This is explained in detail on the Defining tests page, but the example below demonstrates how to set up a simple crate to run tests with test-r.
Example
The following Cargo.toml file sets up a simple library crate with test-r:
[package]
name = "test-r-demo"
version = "0.1.0"
edition = "2021"
[lib]
harness = false # Disable the built-in test harness
[dev-dependencies]
test-r = "1"
And a simple src/lib.rs file defining a single public function and a test for it:
```rust
#[cfg(test)]
test_r::enable!(); // Enabling test-r's test harness (once per build target)

pub fn lib_function() -> u64 {
    println!("lib_function called");
    11
}

#[cfg(test)]
mod tests {
    use test_r::test; // Replacing the built-in #[test] attribute

    use super::*;

    #[test]
    fn test_lib_function() {
        assert_eq!(lib_function(), 11);
    }
}
```
Optional crate features
The test-r test framework, with the default set of enabled features, supports running both sync and async tests, using Tokio as the async runtime.
It is possible to turn off async support by disabling the tokio feature:
[dev-dependencies]
test-r = { version = "1", default-features = false }
Real-world usage
This section lists known projects that use test-r:
- Golem Cloud uses test-r for all its unit and integration tests. (GitHub)
What is not supported?
The following features are not supported by test-r:
- Running doctests
- Output capturing cannot be used together with parallel execution AND dependency injection. Any two of these three features can be chosen, but not all three at the same time.
Acknowledgements
Most of test-r's features were inspired by working with test frameworks in other languages, especially the ZIO Test framework for Scala. The idea of replicating the built-in harness' command line interface came from the libtest-mimic crate. For some features that replicate built-in functionality, parts of the original libtest source code have been reused.
Core features
This chapter covers the core features of test-r:
- Defining tests using the #[test] attribute
- Running tests with the usual command line options
- Customizing the test output
Defining tests
Writing tests with test-r is very similar to writing tests with the built-in test framework, but there are a few differences.
Enabling the test-r harness
Disabling the built-in test harness
First, for every build target where test-r is going to be used, the built-in test harness must be disabled.
This is done by putting harness = false in the build target's section in Cargo.toml:
[lib]
harness = false
[[bin]]
harness = false
[[test]]
name = "integ-test-1"
harness = false
[[test]]
name = "integ-test-2"
harness = false
# ...
Mixing test-r and the built-in test harness
It is recommended to turn off running tests completely in the rest of the targets. For example, if the crate produces both a library and an executable, and all the tests are in the library part, then put test = false in the [[bin]] section:
[[bin]]
test = false
[lib]
harness = false
Without this, cargo test will run all the test harnesses, including the one where the built-in harness was not disabled ([[bin]] in this case), which may fail on command line arguments that the test-r harness accepts but the built-in one does not support.
If the intention is to use both test-r and the built-in test harness in the same crate, that's possible, but be careful with the command line arguments passed to cargo test, as some of them may only be supported by the unstable version of the built-in test framework.
Enabling the test-r harness
For every target where the built-in harness was disabled (with harness = false), we need to install test-r's test runner instead. In other words, if the compilation is in test mode, we have to define a main function that runs the test-r test runner.
This can be done by adding the following macro invocation at the root of the given build target:
```rust
#[cfg(test)]
test_r::enable!();
```
- For [lib] targets, this should be in src/lib.rs (or whatever crate root is specified)
- For [[bin]] targets, this should be in src/main.rs, the src/bin/*.rs files, or the path explicitly set in the crate manifest, for each binary
- For [[test]] targets, this should be in the tests/*.rs files, for each test
Writing tests
Writing tests is done exactly the same way as with the built-in test framework, but using test-r's #[test] attribute instead of the built-in one. We recommend importing the test attribute with use test_r::test; so the actual test definitions look identical to the built-in ones, but it is not mandatory.
```rust
#[cfg(test)]
mod tests {
    use test_r::test;

    #[test]
    fn test_lib_function() {
        assert_eq!(lib_function(), 11);
    }
}
```
Within the test function itself, any assertion macros from the standard library or any third-party assertion crates can be used. (All panics are caught and reported as test failures.)
Writing async tests
The same #[test] attribute can be used for async tests as well. The test runner automatically detects whether the test function is async and runs it accordingly.
```rust
#[cfg(test)]
mod tests {
    use test_r::test;

    #[test]
    async fn test_async_function() {
        assert_eq!(async_lib_function().await, 11);
    }
}
```
Support for async tests requires the tokio feature, which is enabled by default.
test-r runs async tests differently from how #[tokio::test] does. While tokio's test attribute spawns a new current-thread (by default) Tokio runtime for each test, test-r uses a single multi-threaded runtime to run all the tests. This is intentional, to allow shared dependencies that in some way depend on the runtime itself.
Tests returning Result
Tests in test-r can have a Result<_, _> return type. This makes it easier to chain multiple functions within the test that can return an Err, with no need to unwrap each of them. A test that returns a Result::Err will be marked as failed, just as if it had panicked.
```rust
#[cfg(test)]
mod tests {
    use test_r::test;

    #[test]
    fn test_lib_function() -> Result<(), Box<dyn std::error::Error>> {
        let result = lib_function()?;
        assert_eq!(result, 11);
        Ok(())
    }
}
```
Ignoring tests
The standard #[ignore] attribute can be used to mark a test as ignored.
```rust
#[test]
#[ignore]
fn ignored_test() {
    assert!(false);
}
```
Ignored tests can be run with the --include-ignored or --ignored flags, as explained on the Running tests page.
Testing for panics
The #[should_panic] attribute can be used to mark a test as expected to panic. The test will pass if it panics, and fail if it doesn't.
```rust
#[test]
#[should_panic]
fn panicking_test() {
    panic!("This test is expected to panic");
}
```
Optionally, the expected argument can be used to only accept panics containing a specific message:
```rust
#[test]
#[should_panic(expected = "expected to panic")]
fn panicking_test() {
    panic!("This test is expected to panic");
}
```
Running tests
test-r replicates the command line interface of the built-in test harness, so every integration (scripts, IDE support, etc.) should work just as it does without test-r.
This includes some of the unstable flags too; test-r lets you use them without the need to enable unstable features in the compiler.
Cargo test parameters vs test-r parameters
The cargo test command takes some of its own options, a test name, and a list of arguments passed to the test harness itself:

```
Usage: cargo test [OPTIONS] [TESTNAME] [-- [ARGS]...]
```

The parameters passed in OPTIONS select which test targets to build and run. See the official documentation for more details.
TESTNAME is an optional parameter which selects which tests to run in each selected test target. How exactly it is interpreted depends on other options passed in the ARGS part.
Choose what to run
Matching on test names
cargo test hello executes all tests that have the hello substring in their fully qualified name (module path + function name).
cargo test hello -- --exact will only run the test that has the exact fully qualified name hello, which in this case means a function named hello in the root module.
There is a special syntax for matching on tags; see the tags chapter for more details.
Ignored tests
Tests marked with the #[ignore] attribute are not run by default. To run them, use the --include-ignored flag.
It is also possible to run only the ignored tests with the --ignored flag.
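For example, to include the ignored tests in a run, or to run only them:
cargo test -- --include-ignored
cargo test -- --ignored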
Tests expecting panic
Tests using the #[should_panic] attribute are run by default, but they can be skipped with the --exclude-should-panic flag.
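For example:
cargo test -- --exclude-should-panic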
Tests vs benchmarks
The framework supports not only tests (defined with #[test]), but also benchmarks (defined with #[bench]). By default, the test runner executes both. It is possible to run only tests or only benches with the --test and --bench flags.
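For example, to run only the tests or only the benchmarks, respectively:
cargo test -- --test
cargo test -- --bench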
Skipping some tests
The --skip option can be used to skip some tests (just as if they were marked with #[ignore]). It can be used multiple times to skip multiple tests.
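For example, to skip every test whose name contains either of two (hypothetical) substrings:
cargo test -- --skip slow --skip integration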
Parallelism
By default, the test runner uses as many threads as there are logical cores on the machine. This can be changed with the --test-threads flag.
cargo test -- --test-threads=1
Note that parallelism can also be controlled at the code level, per test suite, with the #[sequential] attribute. See the per-test configuration chapter for more details.
Shuffle
By default, the test runner executes tests in definition order. To shuffle the order, use the --shuffle flag. For a deterministic but shuffled order, use the --shuffle-seed flag, providing a numeric seed.
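For example (the seed value here is arbitrary):
cargo test -- --shuffle
cargo test -- --shuffle-seed 12345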
Listing tests
It is possible to just list all the available tests, without executing anything, with the --list flag:
cargo test -- --list
Test output
There are various options controlling the output of the test runner. See the test output chapter for more details.
Debugging
Output capturing is implemented by forking one or more child processes and attaching to their standard output and error channels. This means that attaching a debugger to the parent process will not work as expected. When using a debugger, always pass the --nocapture flag to the test runner to disable output capturing, which guarantees that all the tests are executed in the single root process.
Test output
The default setting of test-r is to use the pretty format and capture test outputs.
Output format
There are four supported output formats in test-r, which can be selected with the --format flag:
- pretty (default) - human-readable output showing the progress and the final results in a verbose way
- terse - human-readable output, emitting only a single character for each test during the test run
- json - emits JSON messages during the test run, useful for integration with other tools like IDEs
- junit - writes a JUnit XML test report, useful for generating browsable test reports
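For example, to emit machine-readable JSON events during the run:
cargo test -- --format json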
When using the pretty (default) mode, the --color flag can be used to control whether the output should use colors:
- auto (default) - colors are used if the terminal supports them
- always - always use colors
- never - do not use colors
Capturing the test output
When output capturing is enabled, lines written to either the standard output or standard error channels are not shown immediately as the test runs. Instead, they are only shown if the test fails. This allows nicer visual tracking of the test progress and results.
The following options control this behavior:
- --nocapture - disables output capturing, showing the output of each test as it runs
- --show-output - shows the output of all tests after they finish, regardless of whether they passed or failed
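For example:
cargo test -- --nocapture
cargo test -- --show-output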
Note that this global output capturing setting can be overridden on a per-test basis using the #[always_capture] and #[never_capture] attributes, as explained in the per-test configuration chapter.
As noted above, output capturing, parallel execution, and dependency injection cannot all be used at the same time: if tests use shared dependencies and the --nocapture flag is not present, the test runner will emit a warning and fall back to single-threaded execution.
Measuring and ensuring execution time
By default, test-r follows the built-in test harness behavior and does not report test execution times. This can be changed by passing the --report-time flag. The --ensure-time flag not only reports these per-test execution times, but also fails the test run if they exceed a pre-configured value. Learn more about this in The Rust Unstable Book.
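For example:
cargo test -- --report-time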
Note that test-r provides a nicer way to fail long-running tests (but only if the tokio feature is enabled) using the #[timeout(ms)] attribute, as explained in the per-test configuration chapter.
Saving the output to a log file
The test output can be saved into a log file using the --logfile <path> flag. Because of the issue described in the Rust issue tracker, the test runner cannot use the provided path directly, as other test harnesses would overwrite it. Instead, test-r interprets the provided path as a template, and appends a random UUID to its file name part for each generated log file. This allows saving multiple JUnit test reports, for example, into a single directory, where a test browser can pick them up.
Advanced features
This chapter covers the advanced features of test-r, which are either not available at all in the built-in test harness, or are at least unstable.
- Dependency injection allows sharing dependencies between tests.
- Tags allow grouping tests and running only a subset of them.
- Benches are used to measure the performance of functions.
- Per-test configuration allows customizing the test execution from the code, instead of using command line options.
- Flaky tests can be either retried, or executed multiple times to verify they aren't flaky.
- Dynamic test generation allows creating new tests from code.
Dependency injection
Tests can share dependencies in test-r. This is especially useful for integration tests, where setting up the integration environment is expensive.
Using shared dependencies
To use a shared dependency from a test, we simply need to add a reference parameter to the test function:
```rust
use test_r::test;

struct SharedDependency {
    value: i32,
}

struct OtherDependency {
    value: i32,
}

#[test]
fn test1(shared: &SharedDependency) {
    assert_eq!(shared.value, 42);
}

#[test]
async fn test2(shared: &SharedDependency, other: &OtherDependency) {
    assert_eq!(shared.value, other.value);
}
```
The names of the parameters do not matter - test dependencies are indexed by their type. If a test needs multiple instances of the same type, a newtype wrapper can be used to distinguish them, as shown in the sketch below.
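As a minimal sketch, two hypothetical newtype wrappers around the same underlying value type keep the two dependencies apart (the providers for these dependencies would be defined as described in the next section):

```rust
use test_r::test;

// Hypothetical newtype wrappers distinguishing two dependencies of the same underlying type
struct PrimaryPort(u16);
struct ReplicaPort(u16);

#[test]
fn uses_both_ports(primary: &PrimaryPort, replica: &ReplicaPort) {
    // Each parameter is resolved by its distinct wrapper type
    assert_ne!(primary.0, replica.0);
}
```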
Providing shared dependencies
Shared dependencies need to be provided for each test suite. A test suite in test-r is the enclosing module where the test functions are defined. It is possible to provide different values for the same dependency in different suites, but it is also possible to "import" provided dependencies from an outer suite. This flexibility allows for a wide range of use cases, from defining singleton dependencies for a whole crate to detailed customization for specific tests.
Test dependencies are provided by constructor functions annotated with #[test_dep]. The constructor function can be sync or async (if the tokio feature is enabled):
```rust
use test_r::test_dep;

#[test_dep]
async fn shared_dependency() -> SharedDependency {
    SharedDependency { value: 42 }
}

#[test_dep]
fn other_dependency() -> OtherDependency {
    OtherDependency { value: 42 }
}
```
Whether the dependency was created by a sync or async function does not matter - they can be used in both sync and async tests.
Using dependencies provided for an outer test suite
As explained above, test dependencies must be provided in each test module. So if we want to use the same instances in an inner test suite, they have to be inherited:
```rust
mod inner {
    use test_r::{inherit_test_dep, test};
    use super::SharedDependency;

    inherit_test_dep!(SharedDependency);

    #[test]
    fn test3(shared: &SharedDependency) {
        assert_eq!(shared.value, 42);
    }
}
```
Dependency graph
Test dependency constructors can depend on other dependencies, just like tests do. This allows defining a complex dependency graph, where each shared dependency is created in the correct order, and only when needed, and gets dropped as soon as no other test needs it.
The following example defines a third dependency (based on the above examples) which requires the other two to be constructed:
```rust
struct ThirdDependency {
    value: i32,
}

#[test_dep]
fn third_dependency(shared: &SharedDependency, other: &OtherDependency) -> ThirdDependency {
    ThirdDependency {
        value: shared.value + other.value,
    }
}
```
Tags
Assigning tags
Tests can be associated with an arbitrary number of tags. Each tag is global, and must be a valid Rust identifier.
Tags can be assigned to tests using the #[tag] attribute:
```rust
use test_r::{tag, test};

#[tag(tag1)]
#[tag(tag2)]
#[test]
fn tagged_test() {
    assert!(true);
}
```
Tagging entire test suites
It is possible to tag an entire test suite. This can be done by using the #[tags] attribute on the module containing the tests, or alternatively by using the tag_suite! macro:
```rust
use test_r::{tag, tag_suite, test};

mod inner1;
tag_suite!(inner1, tag1);

#[tags(tag2)]
mod inner2 {
    // ...
}
```
The tag_suite! macro is necessary because it is currently not possible to put attributes on non-inlined modules.
Running tagged tests
The purpose of tagging tests is to run a subset of the crate's tests selected by tags. To select tests by tags, use the :tag: prefix when passing the test name to cargo test:
cargo test :tag:tag1
This example will run every test tagged as tag1, but no others.
Selecting untagged tests
Sometimes it is useful to select all tests without a tag. This can be done by using the :tag: prefix with no tag name:
cargo test :tag:
Selecting tests by multiple tags
Multiple tags can be combined with the | (or) and & (and) operators. The & operator has higher precedence than |. So the following example:
cargo test ':tag:tag1|tag2&tag3'
is going to run tests tagged as either tag1, or as both tag2 and tag3.
Benches
test-r provides a simple benchmark runner as well, very similar to the built-in one in unstable Rust. The main differences are that test-r also allows defining async bench functions (when the tokio feature is enabled), and that benchmark functions also support dependency injection.
Defining benchmarks
To define a benchmark, just use the #[bench] attribute instead of the #[test] attribute on a function that takes a mutable reference to a Bencher:
```rust
use test_r::{bench, Bencher};

#[bench]
fn bench1(b: &mut Bencher) {
    b.iter(|| 10 + 11);
}
```
The benchmark framework will measure the performance of the function passed to the iter method on the bencher.
If a benchmark needs shared dependencies, they can be added as additional parameters to the benchmark function. The &mut Bencher parameter must always be the first one.
```rust
use test_r::{bench, Bencher};

struct SharedDependency {
    value: i32,
}

#[bench]
fn bench2(b: &mut Bencher, shared: &SharedDependency) {
    b.iter(|| shared.value + 11);
}
```
Async benchmarks
When the tokio feature is enabled, benchmarks can be async too. Just use the #[bench] attribute on an async function that takes a mutable reference to an AsyncBencher:
```rust
use test_r::{bench, AsyncBencher};

#[bench]
async fn bench1(b: &mut AsyncBencher) {
    b.iter(|| Box::pin(async { 10 + 11 })).await;
}
```
Running benchmarks
Benchmarks are run by default as part of cargo test, but they can also be executed separately using cargo bench, or by passing the --bench flag to cargo test, as shown below.
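For example:
cargo bench
cargo test -- --bench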
Per-test configuration
Some aspects of the test runner can be enforced on a per-test or per-suite basis using special attributes, instead of relying on command line options.
Enforce sequential execution
Parallelism of the test runner is normally controlled by the --test-threads command line argument. It is possible to enforce sequential execution for all tests within a test suite by putting the #[sequential] attribute on the module representing the suite:
```rust
use test_r::sequential;

#[sequential]
mod suite {
    use test_r::test;

    #[test]
    fn test1() {
        assert!(true);
    }

    #[test]
    fn test2() {
        assert!(true);
    }
}
```
The rest of the tests in the crate will still be parallelized based on the --test-threads argument.
Always or never capture output
Two attributes can enforce capturing or not capturing the standard output and error of a test. Without these attributes, the runner will either capture (by default) or not capture (if the --nocapture command line argument is passed).
When the #[always_capture] attribute is used on a #[test], the output will be captured even if the --nocapture argument is passed. Conversely, the #[never_capture] attribute will prevent capturing the output even if the --nocapture argument is not passed.
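A minimal sketch of how these attributes could be applied (assuming they are imported from the crate root, like the other test-r attributes shown elsewhere in this book):

```rust
use test_r::{always_capture, never_capture, test};

#[always_capture]
#[test]
fn quiet_test() {
    println!("captured even when --nocapture is passed");
}

#[never_capture]
#[test]
fn noisy_test() {
    println!("shown as the test runs, even without --nocapture");
}
```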
Timeout
The #[timeout(duration)] attribute can be used to enforce a timeout for a test. The timeout is specified in milliseconds as a number:
```rust
use test_r::{test, timeout};

#[timeout(1000)]
#[test]
async fn test1() {
    tokio::time::sleep(std::time::Duration::from_secs(2)).await;
    assert!(true);
}
```
This feature only works when using the async test runner (enabled by the tokio feature).
Working with flaky tests
Tests can sometimes be flaky, failing only sporadically or depending on the environment or hardware they run on.
test-r provides two ways to handle flaky tests:
Marking tests as known to be flaky
By using the #[flaky(n)] attribute, where n is a number, we acknowledge that a test is known to be flaky, and the test runner will retry it up to n times before marking it as failed.
```rust
use test_r::{flaky, test};

#[flaky(3)]
#[test]
fn flaky_test() {
    assert!(false); // This test will fail 3 times before being marked as failed
}
```
Ensuring tests are not flaky
The opposite approach is to ensure that a test is not flaky by running it multiple times. This can help in diagnosing flakiness and reproducing issues locally. The #[non_flaky(n)] attribute will run a test n times before marking it as succeeded.
```rust
use test_r::{non_flaky, test};

#[non_flaky(3)]
#[test]
fn non_flaky_test() {
    assert!(true); // This test will pass 3 times before being marked as succeeded
}
```
Dynamic test generation
Normally the test tree is static, defined at compile time using modules representing test suites and functions annotated with #[test] defining test cases. Sometimes, however, it is useful to generate test cases at runtime. test-r supports this using the #[test_gen] attribute.
Test generators can be either sync or async (if the tokio feature is enabled). The generator function must take a single parameter, a mutable reference to DynamicTestRegistration. Dependency injection into the generator function is currently not supported, but the dynamically generated tests can use shared dependencies.
The following two examples demonstrate generating sync and async tests using the #[test_gen] attribute:
```rust
use test_r::{add_test, DynamicTestRegistration, TestType, test_gen};

struct Dep1 {
    value: i32,
}

struct Dep2 {
    value: i32,
}

#[test_gen]
fn gen_sync_tests(r: &mut DynamicTestRegistration) {
    println!("Generating some tests with dependencies in a sync generator");
    for i in 0..10 {
        add_test!(
            r,
            format!("test_{i}"),
            TestType::UnitTest,
            move |dep1: &Dep1| {
                println!("Running test {} using dep {}", i, dep1.value);
                let s = i.to_string();
                let i2 = s.parse::<i32>().unwrap();
                assert_eq!(i, i2);
            }
        );
    }
}

#[test_gen]
async fn gen_async_tests(r: &mut DynamicTestRegistration) {
    println!("Generating some async tests with dependencies in an async generator");
    for i in 0..10 {
        add_test!(
            r,
            format!("test_{i}"),
            TestType::UnitTest,
            move |dep1: &Dep1, d2: &Dep2| async {
                println!("Running test {} using deps {} {}", i, dep1.value, d2.value);
                let s = i.to_string();
                let i2 = s.parse::<i32>().unwrap();
                assert_eq!(i, i2);
            }
        );
    }
}
```
The generator functions are executed at the startup of the test runner, and all the generated tests are added to the test tree. The names of the generated tests must be unique. Each test is added to the test suite in which the generator function is defined.
How to
This section contains a set of recommendations for solving various testing problems using a combination of test-r and other third-party crates.
- Tracing shows how to set up Tokio tracing for tests.
- Property based testing demonstrates how to use the proptest framework with test-r.
- Golden tests are a way to compare the output of a test with a reference file.
- GitHub Actions with JUnit explains how to run tests on GitHub Actions and show the results using the JUnit output format.
Tracing
Subscribers for Tokio tracing usually need to be set up once at the beginning of the application, and further calls to their initialization functions may cause panics.
With test-r, the shared dependency feature can be used to set up the tracing subscriber once, before the first test is executed, and keep it alive until the end of the test run.
The following example demonstrates this using the tracing-subscriber crate:
```rust
use test_r::{test, test_dep};
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;

struct Tracing;

impl Tracing {
    pub fn init() -> Self {
        tracing_subscriber::registry()
            .with(tracing_subscriber::fmt::layer().pretty())
            .init();
        Self
    }
}

#[test_dep]
fn tracing() -> Tracing {
    Tracing::init()
}

#[test]
fn test1(_tracing: &Tracing) {
    tracing::info!("test1");
}

#[test]
fn test2(_tracing: &Tracing) {
    tracing::info!("test2");
}
```
Property based testing
Property based testing using the proptest crate
The proptest library works well together with test-r. There are no special requirements; just make sure to import test-r's test attribute before using the proptest! macro to define the property based tests.
For example:
```rust
use test_r::test;
use proptest::prelude::*;

fn parse_date(s: &str) -> Option<(u32, u32, u32)> {
    todo!()
}

proptest! {
    #[test]
    fn parses_all_valid_dates(s in "[0-9]{4}-[0-9]{2}-[0-9]{2}") {
        parse_date(&s);
    }
}
```
Golden tests
Golden tests compare a previously saved output for a given test with the current output. This can be very useful to verify backward compatibility, for example. There are several golden testing libraries available in the Rust ecosystem.
The test-r crate does not provide built-in support for golden tests, but it should work with most of these libraries.
Golden tests with the goldenfile crate
The goldenfile crate is proven to work well with test-r. For example, the following helper function can be used to check the backward compatibility of reading serialized binary data with custom serialize/deserialize functions requiring bincode codecs:
```rust
use std::fmt::Debug;
use std::io::Write;
use std::path::Path;

use bincode::{Decode, Encode};
use goldenfile::Mint;
use test_r::test;

fn serialize<T: Encode>(value: &T) -> Result<Vec<u8>, bincode::Error> {
    todo!()
}

fn deserialize<T: Decode>(data: &[u8]) -> Result<T, bincode::Error> {
    todo!()
}

fn is_deserializable<T: Encode + Decode + PartialEq + Debug>(old: &Path, new: &Path) {
    let old = std::fs::read(old).unwrap();
    let new = std::fs::read(new).unwrap();

    // Both the old and the latest binary can be deserialized
    let old_decoded: T = deserialize(&old).unwrap();
    let new_decoded: T = deserialize(&new).unwrap();

    // And they represent the same value
    assert_eq!(old_decoded, new_decoded);
}

pub(crate) fn backward_compatible<T: Encode + Decode + PartialEq + Debug + 'static>(
    name: impl AsRef<str>,
    mint: &mut Mint,
    value: T,
) {
    let mut file = mint
        .new_goldenfile_with_differ(
            format!("{}.bin", name.as_ref()),
            Box::new(is_deserializable::<T>),
        )
        .unwrap();
    let encoded = serialize(&value).unwrap();
    file.write_all(&encoded).unwrap();
    file.flush().unwrap();
}

#[derive(Debug, PartialEq, Encode, Decode)]
struct Example {
    value: i32,
}

#[test]
pub fn example() {
    let mut mint = Mint::new("tests/goldenfiles");
    backward_compatible("example1", &mut mint, Example { value: 42 });
}
```
GitHub Actions with JUnit
With test-r it is easy to generate JUnit test reports when running the tests on CI. The generated XMLs can then be parsed by another GitHub Actions step to provide a nicer test report in the GitHub UI.
The following example shows how to run the tests with test-r and generate JUnit XMLs:
cargo test -- --format junit --logfile target/report.xml
This will generate one or more JUnit XML files in the target directory.
The action-junit-report action can be used to parse the generated XMLs and show the results in the GitHub UI. The following example shows how to use it:
```yaml
- name: Publish Test Report
  uses: mikepenz/action-junit-report@v4
  if: success() || failure() # always run even if the previous step fails
  with:
    report_paths: '**/target/report-*.xml'
    detailed_summary: true
    include_passed: true
```