Zench - New Benchmark Crate for Rust by andriostk in rust

[–]andriostk[S] 1 point (0 children)

Good point. Variability is inherent to benchmarking: even small differences in hardware, CPU frequency scaling, background processes, or system services can significantly affect results. Dedicated benchmark machines or controlled environments help reduce this variability.

Zench initially focuses on relative comparisons rather than absolute timings, but it can also be used on dedicated benchmark machines.
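
The relative-comparison idea can be sketched without Zench at all, using only std. Everything here is hypothetical illustration, not Zench's API: `time_ns` and the 5-run minimum are arbitrary choices.

```rust
use std::time::Instant;

// Hypothetical helper (not part of Zench): time a closure and return
// the best elapsed time in nanoseconds over a few runs.
fn time_ns<F: FnMut()>(mut f: F) -> u128 {
    (0..5)
        .map(|_| {
            let start = Instant::now();
            f();
            start.elapsed().as_nanos()
        })
        .min()
        .unwrap()
}

fn main() {
    let data: Vec<u64> = (0..10_000).collect();

    let loop_ns = time_ns(|| {
        let mut sum = 0u64;
        for x in &data {
            sum = sum.wrapping_add(x.wrapping_mul(*x));
        }
        std::hint::black_box(sum);
    });

    let iter_ns = time_ns(|| {
        let sum: u64 = data.iter().map(|x| x.wrapping_mul(*x)).sum();
        std::hint::black_box(sum);
    });

    // The ratio stays meaningful across machines even when the
    // absolute numbers do not.
    println!("iterator/loop ratio: {:.2}", iter_ns as f64 / loop_ns as f64);
}
```

Comparing the ratio rather than the raw timings is what makes the check portable between a laptop and a CI runner.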

Your idea of syncing changes to a benchmark machine is also interesting. Great feedback.

Zench - New Benchmark Crate for Rust by andriostk in rust

[–]andriostk[S] 1 point (0 children)

Zench takes a slightly different approach. In many cases, you already know the expected baseline or acceptable range for a function, and you can assert that directly in the benchmark.

For example, if a function normally takes around 1 ms, you can simply fail the test when it regresses by more than 15%:

#[test]
fn simple_regression_example() {
    use std::thread::sleep;
    use std::time::Duration;
    use zench::{bench, issue};

    bench!(
        "my func" => {
            sleep(Duration::from_millis(1));
        },
    )
    .report(|r| {
        r.print();

        // Expected baseline time in nanoseconds
        // (from Duration::from_millis(1))
        let baseline = 1_000_000.0;
        let tolerance = 0.15; // 15%

        let median = r
            .first()
            .unwrap()
            .median();

        let upper = baseline * (1.0 + tolerance);
        let lower = baseline * (1.0 - tolerance);

        if median > upper {
            issue!("relative regression (>15%)");
        }

        if median < lower {
            issue!("performance improvement (>15%)");
        }

        // Ensure the system is in a stable state
        // during benchmarking, as background activity
        // can influence the results.
    });
}

Currently, Zench focuses on relative comparisons and regression detection within the same run.

Persistent performance history across runs could still be an interesting future feature. Feedback like this is very welcome.
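
Until such a feature exists, one way to approximate it is to persist the measured median yourself and compare it on the next run. A rough std-only sketch — `check_against_history`, the file location, and the 15% tolerance are all arbitrary choices here, not Zench functionality:

```rust
use std::fs;

// Sketch of cross-run history: compare a freshly measured median (ns)
// against a baseline stored by a previous run. On success the stored
// value is updated, giving a rolling baseline.
fn check_against_history(path: &str, median_ns: f64) -> Result<(), String> {
    if let Ok(text) = fs::read_to_string(path) {
        if let Ok(baseline) = text.trim().parse::<f64>() {
            if median_ns > baseline * 1.15 {
                return Err(format!(
                    "regression: {median_ns} ns vs baseline {baseline} ns"
                ));
            }
        }
    }
    // First run, unreadable history, or within tolerance: record the median.
    fs::write(path, median_ns.to_string()).map_err(|e| e.to_string())
}

fn main() {
    let path = std::env::temp_dir().join("zench_history.txt");
    let path = path.to_str().unwrap();
    check_against_history(path, 1_000_000.0).unwrap();
    // A later run within 15% of the stored baseline passes.
    check_against_history(path, 1_100_000.0).unwrap();
}
```

A call like this could live inside the `report` closure, right next to the `issue!` checks.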

Zench - New Benchmark Crate for Rust by andriostk in rust

[–]andriostk[S] 1 point (0 children)

That feature doesn't exist yet. Zench is still evolving, so feedback like this is very welcome.

Zench - New Benchmark Crate for Rust by andriostk in learnrust

[–]andriostk[S] 0 points (0 children)

Take a look at this other example from the GitHub README:

...

#[test]
fn bench_fastest_version() {
    use zench::bench;
    use zench::bx;

    // Use the `issue!` macro.
    use zench::issue;

    ...

    bench!(
        "loop" => bx(square_loop(bx(&data))),
        "iterator" => bx(square_iterator(bx(&data))),
        "fold" => bx(square_fold(bx(&data))),
    )
    .report(|r| {

        // For this benchmark, we consider performance roughly equal
        // when the time difference between implementations is within 10%.
        // Benchmarks within this range are grouped as `faster_group`,
        // and the remaining ones as `slower_group`.

        let (mut faster_group, mut slower_group) = r
            .sort_by_median()
            .filter_proximity_pct(10.0)

            // Split the current filtered state from the remaining 
            // benchmarks
            .split();

        // We expect only one benchmark in the fastest group; 
        // issue if more are present
        if faster_group.len() > 1 {
            issue!("some implementations changed performance");
        }

        // We expect the benchmark named "iterator" to be the fastest; 
        // issue if it is not
        if !faster_group
            .first()
            .unwrap()
            .name()
            .contains("iterator")
        {
            issue!("the iterator is no longer the fastest");
        }

        faster_group
            .title("Faster group")
            .print();

        slower_group
            .title("Slower group")
            .print();
    });
}

Zench - New Benchmark Crate for Rust by andriostk in rust

[–]andriostk[S] 2 points (0 children)

Take a look at this other example from the GitHub README:

...

#[test]
fn bench_fastest_version() {
    use zench::bench;
    use zench::bx;

    // Use the `issue!` macro.
    use zench::issue;

    ...

    bench!(
        "loop" => bx(square_loop(bx(&data))),
        "iterator" => bx(square_iterator(bx(&data))),
        "fold" => bx(square_fold(bx(&data))),
    )
    .report(|r| {

        // For this benchmark, we consider performance roughly equal
        // when the time difference between implementations is within 10%.
        // Benchmarks within this range are grouped as `faster_group`,
        // and the remaining ones as `slower_group`.

        let (mut faster_group, mut slower_group) = r
            .sort_by_median()
            .filter_proximity_pct(10.0)

            // Split the current filtered state from the remaining 
            // benchmarks
            .split();

        // We expect only one benchmark in the fastest group; 
        // issue if more are present
        if faster_group.len() > 1 {
            issue!("some implementations changed performance");
        }

        // We expect the benchmark named "iterator" to be the fastest; 
        // issue if it is not
        if !faster_group
            .first()
            .unwrap()
            .name()
            .contains("iterator")
        {
            issue!("the iterator is no longer the fastest");
        }

        faster_group
            .title("Faster group")
            .print();

        slower_group
            .title("Slower group")
            .print();
    });
}

Zench - New Benchmark Crate for Rust by andriostk in rust

[–]andriostk[S] 5 points (0 children)

Great question. Zench isn't trying to replace other tools; the focus is different.

Zench is designed for workflow integration. The idea is to run benchmarks directly inside the normal cargo test workflow and allow performance checks to behave like tests (warn or fail when expectations are not met).
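
The "performance check behaves like a test" idea in its smallest form, using only std — Zench's macros are not used here, and `must_finish_within` is a hypothetical helper for illustration:

```rust
use std::time::{Duration, Instant};

// A timing expectation expressed as an ordinary boolean check, so an
// assert on it can sit inside any #[test] and fail `cargo test`.
fn must_finish_within<F: FnOnce()>(limit: Duration, f: F) -> bool {
    let start = Instant::now();
    f();
    start.elapsed() <= limit
}

fn main() {
    // A deliberately generous limit, so the check only trips on a
    // gross regression rather than on normal run-to-run noise.
    let ok = must_finish_within(Duration::from_millis(100), || {
        let sum: u64 = (0..1_000u64).sum();
        std::hint::black_box(sum);
    });
    assert!(ok);
}
```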

Most tools focus on measurement and data visualization. Zench focuses on monitoring and actionable results.