May 31, 2020

Autogenerate Parameterized Tests in Rust with a Procedural Macro

Recently I found myself needing to parameterize a single test, written in a Rust codebase, over the contents of multiple arbitrary files contained within a single filesystem directory.

In Java we could write something like this:

import java.io.File;
import java.util.List;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class Test {

    @Parameterized.Parameters(name = "{0}")
    public static List<Object[]> data() throws Exception {
        // ... return list of input files, one per test
    }

    private final File file;

    public Test(File file) { this.file = file; }

    @org.junit.Test
    public void test() {
        // ... run test logic using contents of file
    }
}
Rust offers no runtime reflection with which to discover and register tests dynamically, so we must achieve this at compile time with a macro. Because we effectively need to execute arbitrary logic at compile time, we’ll need to define what Rust calls a procedural macro, which accepts a TokenStream (a raw sequence of tokens that can be parsed into an AST) as input and returns a TokenStream with which the macro call will be replaced.

This ability to accept an AST is especially useful because I also want to provide a list of filenames to exclude as known test failures, i.e.:

    // Known test failures:
    generate_tests!(["test_1.txt", "test_2.txt"]);

I’m lazy, so first I searched around to see if I could get away with using a preexisting library. The most relevant result I could find was foresterre/parameterized. But this library generates tests for known expected inputs, whereas I needed tests for, as Señor Rumsfeld would say, unknown, unexpected inputs:

  • unknown, because I wanted the mere presence of a file in a given input directory to trigger the generation of a corresponding test; in other words, I shouldn’t have to modify a source file at all to add a new test to the test suite
  • unexpected, because I wanted to first pass the contents of each file to a binary executable and use the resultant output as the expected output for the test itself

After more fruitless searching, it was clear I had to whip up something myself, as described in the remainder of this post.

Create a Dedicated Subcrate for the Macro

I learned the hard way that I couldn’t just define the new macro in my crate under test; Rust requires that procedural macros be defined in their own crate. I added a lib/ directory to my main crate, then added files and directories as follows:

lib/
└── <name>-derive
    ├── Cargo.lock
    ├── Cargo.toml
    └── src
        └── lib.rs

Here, <name> is the name of the parent crate. Cargo.toml looks like this:

[package]
name = "<name>-derive"
version = "0.1.0"
authors = ["<your-name-here>"]
edition = "2018"

[lib]
proc-macro = true

[dependencies]
quote = "1.0"
syn = "1.0.30"
proc-macro2 = "1.0"

The [lib] section here, specifically proc-macro = true, tells Cargo that this crate hosts procedural macro definitions.
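For completeness: the parent crate must also depend on this subcrate in order to invoke the macro from its tests. Assuming the lib/ layout above (the path and placeholder names here are just my layout, adjust to taste), the parent’s Cargo.toml would gain something like:

```toml
[dev-dependencies]
# <name>-derive is the placeholder subcrate name from above.
<name>-derive = { path = "lib/<name>-derive" }
```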

Defining the Macro

The rest of this post will focus on the macro definition itself, which will go in lib.rs of the <name>-derive subcrate. I’ll get the boilerplate imports out of the way first:

extern crate proc_macro;

use proc_macro::TokenStream;
use proc_macro2::{Span, Literal, Ident};
use quote::quote;
use std::collections::HashSet;
use std::iter::FromIterator;
use syn::{Token, Lit};
use syn::parse::{Parse, ParseStream};
use syn::punctuated::Punctuated;

One thing I should point out here: there is significant overlap in functionality among these libraries, and I spent a not-insignificant amount of time being wrong about, e.g., which Ident struct to use from which crate, who expected Lit versus Literal, and so on. I’m also not sure what exactly the differences are between proc_macro and proc_macro2, but I do know that I burned plenty of time using the wrong one. This probably means I “read” the documentation too fast, go figure.

Anyway, here’s the start of our macro:

#[proc_macro]
pub fn generate_tests(input: TokenStream) -> TokenStream {

    let test_input = syn::parse_macro_input!(input as TestInput);

    // ...
}

Rather than parse the input stream directly in the macro body, we define a struct TestInput with an implementation of the trait syn::Parse. As described above, the only input we intend to pass to the macro is a list of filenames of known test failures; accordingly, our struct TestInput will contain those names within a set:

struct TestInput {
    known_test_failures: HashSet<String>,
}

impl Parse for TestInput {
    fn parse(input: ParseStream) -> syn::Result<Self> {
        let content;
        syn::bracketed!(content in input);
        let inner_tokens: Punctuated<Lit, Token![,]> =
            content.parse_terminated(Lit::parse)?;
        Ok(TestInput {
            known_test_failures: inner_tokens.iter().filter_map(|s| {
                match s {
                    Lit::Str(a) => Some(a.value()),
                    _ => {
                        println!("Warning: ignoring non-string literal in KTF list.");
                        None
                    }
                }
            }).collect(),
        })
    }
}
In the Parse trait implementation, we expect the input ParseStream to start and end with brackets [ and ] (I say “we expect” but of course this specification is arbitrary according to my whims). Within those brackets, we expect a Punctuated<Lit, Token![,]>, which is a wonderful way to say “a list of literals separated by, and optionally terminated by, commas.”

We’ll get back a list of syn::Lit, which is an enum comprised of more specific type variants. Given that I only intend to pass a list of string literals to the macro, I tried to parse a list of syn::Lit::Str instead, but Punctuated requires a type, not a variant. This is no big deal, as we can simply keep the Lit::Strs and throw away anything else we find, emitting a warning when we do so.
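Outside a proc macro, the “bracketed, comma-separated, optionally comma-terminated list of string literals” idea can be sketched with plain stdlib string handling. This is only an analogy to what bracketed! and Punctuated do for us, and unlike the real implementation, which warns and skips non-string literals, this toy version simply returns an error:

```rust
use std::collections::HashSet;

// A rough stdlib analogy (not the macro code itself) of parsing a
// bracketed, comma-separated list of quoted strings into a set.
fn parse_ktf_list(input: &str) -> Result<HashSet<String>, String> {
    let inner = input
        .trim()
        .strip_prefix('[')
        .and_then(|s| s.strip_suffix(']'))
        .ok_or("expected surrounding brackets")?;
    inner
        .split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty()) // tolerate a trailing comma
        .map(|s| {
            // Accept only double-quoted "string literals".
            s.strip_prefix('"')
                .and_then(|s| s.strip_suffix('"'))
                .map(String::from)
                .ok_or_else(|| format!("not a string literal: {}", s))
        })
        .collect()
}

fn main() {
    let ktfs = parse_ktf_list(r#"["test_1.txt", "test_2.txt",]"#).unwrap();
    assert!(ktfs.contains("test_1.txt"));
    assert_eq!(ktfs.len(), 2);
    assert!(parse_ktf_list("no brackets").is_err());
}
```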

Returning to our macro definition proper, we add Part 1 as follows:

#[proc_macro]
pub fn generate_tests(input: TokenStream) -> TokenStream {

    let mut test_input = syn::parse_macro_input!(input as TestInput);

    // Part 1
    let entries: Vec<String> = std::fs::read_dir("testdir").expect("dir")
        .map(|res| res.map(|e| e.path()))
        .filter_map(|p| {
            if !p.is_ok() { panic!("A PathBuf is wrapped in an error.") }
            let pathbuf = p.expect("pathbuf");
            let filename = pathbuf.file_name().expect("filename").to_str().expect("str");
            if !filename.ends_with(".txt") { panic!("A test file in the test directory doesn't end in .txt as it should.") }
            if test_input.known_test_failures.remove(filename) {
                println!("Ignoring known test failure: {}", filename);
                return None
            }
            Some(filename.to_string())
        })
        .collect();

    if !test_input.known_test_failures.is_empty() {
        panic!("One or more KTFs didn't match an actual test file: {:?}", test_input.known_test_failures)
    }

    // Part 2 (collapsed) ...

We now walk over the files in the testdir/ directory, panicking when we encounter any with malformed filenames or unexpected file extensions. We also ignore filenames contained within the set of known_test_failures, removing these from the set itself as we go so that we can assert that the set is empty after processing every file in the test directory. This prevents us from silently introducing regressions into the test suite in the future, e.g. by deleting a known-failing test file.

(I’m not going to claim this code for Part 1 is perfect, idiomatic Rust for the stated purposes – I’m sure it could be improved.)
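The bookkeeping in Part 1 can be isolated from the filesystem and sketched as a pure function; select_test_files here is a hypothetical helper for illustration, not part of the macro:

```rust
use std::collections::HashSet;

// A pure-logic sketch of Part 1's bookkeeping, with the directory I/O
// factored out: keep filenames that should become tests, remove known
// test failures (KTFs) from the set as we go, and panic if any KTF
// never matched a real file.
fn select_test_files(listing: &[&str], ktfs: &mut HashSet<String>) -> Vec<String> {
    let kept: Vec<String> = listing
        .iter()
        .filter_map(|filename| {
            if !filename.ends_with(".txt") {
                panic!("unexpected file extension: {}", filename);
            }
            // remove() returns true iff the filename was in the set.
            if ktfs.remove(*filename) {
                return None;
            }
            Some(filename.to_string())
        })
        .collect();
    if !ktfs.is_empty() {
        panic!("KTFs didn't match an actual test file: {:?}", ktfs);
    }
    kept
}

fn main() {
    let mut ktfs: HashSet<String> =
        ["test_1.txt", "test_2.txt"].iter().map(|s| s.to_string()).collect();
    let listing = ["test_0.txt", "test_1.txt", "test_2.txt", "test_3.txt", "test_4.txt"];
    assert_eq!(
        select_test_files(&listing, &mut ktfs),
        vec!["test_0.txt", "test_3.txt", "test_4.txt"]
    );
}
```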

Collapsing Part 1 and adding Part 2:

#[proc_macro]
pub fn generate_tests(input: TokenStream) -> TokenStream {

    let mut test_input = syn::parse_macro_input!(input as TestInput);

    // Part 1 (collapsed) ...

    // Part 2
    let mut streams: Vec<TokenStream> = vec![];
    entries.iter().for_each(|test_filename| {
        let test_name = &test_filename[..test_filename.len()-4];

        let filename = Literal::string(test_filename);
        let methodname = Ident::new(test_name, Span::call_site());
        streams.push(
            (quote! {
                #[test]
                fn #methodname() {
                    common::test(#filename);
                }
            }).into()
        );
    });

    TokenStream::from_iter(streams)
}

Now that we have a list of test filenames that aren’t ignored as known failures, we want to generate a test method for each of them. This requires deriving a method name from the test filename by truncating the file extension, then representing both names as Rust syntactical expressions (Ident and Literal, respectively). We interpolate these into a test method definition using the quote! macro, which produces a TokenStream. At the very end we merge all of these into a single stream, and this is the macro’s return value.
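As a side note on that truncation step, here is a standalone sketch of the name derivation; derive_test_name is a hypothetical helper, not part of the macro. It strips the ".txt" extension and checks that the remainder is a plausible Rust identifier, since Ident::new panics at expansion time on invalid input:

```rust
// Strip the ".txt" extension and verify the remainder could serve as a
// Rust test method name (underscores, ASCII letters, and non-leading
// ASCII digits only).
fn derive_test_name(filename: &str) -> String {
    let name = filename
        .strip_suffix(".txt")
        .expect("filename should end in .txt");
    let valid = !name.is_empty()
        && name.chars().enumerate().all(|(i, c)| {
            c == '_' || c.is_ascii_alphabetic() || (i > 0 && c.is_ascii_digit())
        });
    if !valid {
        panic!("{} does not yield a valid test method name", filename);
    }
    name.to_string()
}

fn main() {
    assert_eq!(derive_test_name("test_0.txt"), "test_0");
}
```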

Note that my test method definition simply calls another method common::test(...); if you plan to call this macro from multiple locations, or if your test logic is long/complex, this is a good approach as it keeps the output AST as simple as possible, the idea being that the less there is to debug, the better.

End Result

Thus, invoking the macro like so in a file named (for example) test.rs:

    // Known test failures:
    generate_tests!(["test_1.txt", "test_2.txt"]);

will effectively expand the compiled artifact for test.rs to include the compiled equivalents of the following source (assuming presence of files test_$NUM.txt on the filesystem, where $NUM is 0 through 4):

#[test] fn test_0() { common::test("test_0.txt") }
#[test] fn test_3() { common::test("test_3.txt") }
#[test] fn test_4() { common::test("test_4.txt") }