Running unit tests
This document is a practical guide to setting up and using the Lind testing infrastructure. It outlines the steps needed to run the unit test suite, understand the results it produces, and contribute new tests to the framework.
Since Lind is currently limited to the AMD64 architecture, Docker is used to provide a consistent and controlled testing environment across different host systems. You can install Docker from its website.
Testing Workflow
- Clone the repo using
git clone https://github.com/Lind-Project/lind-wasm.git
- Change directory to the repo
cd lind-wasm
- Build Docker Image
docker build -t testing_image -f .devcontainer/Dockerfile --build-arg DEV_MODE=true --platform=linux/amd64 .
- Run the image
docker run -it testing_image /bin/bash
- Build toolchain (glibc and wasmtime)
bazel build //:make_glibc //:make_wasmtime
- Run the test suite
bazel run //:python_tests
(This will run the whole test suite. Use
scripts/wasmtestreport.py --help
to list available arguments and flags.)
Note: Pass test suite arguments using
bazel run //:python_tests -- <wasmtestreport arguments>
For example:
bazel run //:python_tests -- --timeout 10
What the test suite does
- Test Case Collection: Scans the unit-tests folder for .c files.
- Filtering: Applies include/exclude filters (--run, --skip, and skip_test_cases.txt).
- Test Execution: Compiles and executes each test case twice, once with native gcc and once with lind-wasm, and records the output. (Note: gcc is skipped for tests with an expected output fixture and for tests with non-deterministic output.)
- Comparing Outputs: Marks a test as successful if the outputs match. (Note: non-deterministic tests succeed as long as compilation and execution succeed.)
- Reporting: Test results are written to a JSON-formatted and an HTML-formatted report in the current working directory. The reports include a summary of the full test run, as well as the status, error type, and output of each test case.
Error Types
The output will show the total number of test cases, along with counts for successes, failures, and each of the following error types:
- "Failure_native_compiling": Failed during GCC compilation
- "Failure_native_running": Failed while running the GCC-compiled binary
- "Native_Segmentation_Fault": Segmentation fault while running the GCC binary
- "Native_Timeout": Timed out during the GCC run
- "Lind_wasm_compiling": Failed during compilation with lind-wasm
- "Lind_wasm_runtime": Failed while running the lind-wasm-compiled binary
- "Lind_wasm_Segmentation_Fault": Segmentation fault while running the wasm binary
- "Lind_wasm_Timeout": Timed out during the lind-wasm run
- "Output_mismatch": Mismatch between the GCC and wasm outputs
- "Unknown_Failure": Unknown failure
Test cases are split into deterministic and non-deterministic categories, based on how the lind-wasm output is compared against the native gcc output.
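As an illustration of the difference, below is a minimal sketch of a test whose output is inherently non-deterministic because it prints its own process ID. How the suite classifies a test as non-deterministic (for example, by folder or naming convention) is not covered here; the point is only that its gcc and lind-wasm outputs cannot be compared byte-for-byte, so such a test is judged by whether compilation and execution succeed.

```c
/* Sketch of a test with non-deterministic output: the printed PID
 * changes between runs, so comparing the gcc output with the
 * lind-wasm output byte-for-byte would not be meaningful. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = getpid();
    printf("running with pid %d\n", (int)pid);
    return 0;
}
```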
Directory Structure
- tests/unit-tests/ : Folder containing all .c test cases.
- expected/ : Directory under each test folder for expected output files.
- testfiles/ : Extra files needed by tests, copied into the Lind FS.
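As a sketch of how testfiles/ might be used, the test below reads an extra input file. The file name input.txt and the relative path testfiles/input.txt are assumptions made for this example; they are not a statement of how the suite lays out paths inside the Lind FS.

```c
/* Hypothetical test that depends on an extra input file.
 * Assumes input.txt was placed in the test folder's testfiles/
 * directory and copied into the Lind FS before execution. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("testfiles/input.txt", "r");
    if (f == NULL) {
        printf("could not open testfiles/input.txt\n");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f) != NULL) {
        fputs(line, stdout); /* echo the file so both runs produce comparable output */
    }
    fclose(f);
    return 0;
}
```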
How to add test cases
To add a test case, place a file with a .c extension containing C code in the appropriate folder under tests/unit-tests. The test suite will pick it up on its next run. If the program's output can be compared directly, i.e. the output of the gcc run matches the output of the lind-wasm run, nothing more is needed (see the sketch below).
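For example, a minimal deterministic test case might look like the sketch below; the folder and file name are placeholders, not part of the existing suite.

```c
/* tests/unit-tests/<category>/hello_test.c (placeholder path) */
#include <stdio.h>

int main(void) {
    /* Deterministic output: the gcc run and the lind-wasm run should
     * both print exactly this line, so the suite can compare them. */
    printf("hello from a lind unit test\n");
    return 0;
}
```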
Any failure to compile or run with either gcc or lind-wasm is counted as a failure, as is a mismatch between the native (gcc) and wasm outputs.
Example Combined Usage
bazel run //:python_tests -- \
--generate-html \
--skip config_tests file_tests \
--timeout 10 \
--output results_json \
--report test_report
This will:
- Skip the specified folders (config_tests and file_tests)
- Use a 10-second timeout
- Save the JSON output as results_json.json
- Generate an HTML report named test_report.html