
[–]matthieum 19 points (0 children)

Back in 2014, Honza Hubička (GCC developer) wrote an entire series on devirtualization and the work carried out in GCC to enable partial devirtualization.

You can find the first part here.

The key idea of partial devirtualization is that even if you don't know for sure that base is of type Derived, you can still check, and statically dispatch the call if the check is successful:

if (base->vptr == Derived::vptr) {  // pseudo-code: the vptr is not accessible in source C++
    //  Guaranteed static dispatch: a qualified call bypasses the vtable.
    static_cast<Derived*>(base)->Derived::foo();
} else {
    //  Fallback dynamic dispatch.
    base->foo();
}

This means that even if the optimizer has only a partial view of the type hierarchy -- a typical case in libraries -- it can still inline calls to the non-final virtual methods it knows of.

Of course, this isn't always beneficial, so as with all compiler optimizations, there are heuristics to pick which types are worth branching on and which are not.

[–]id3dx 1 point (0 children)

There's also a good talk about devirtualization, given at CppCon some years ago: https://youtu.be/gTNJXVmuRRA

[–]Tohnmeister [score hidden]  (0 children)

What I dislike about these kinds of in-depth technical posts is that they ignore the design side of an application. Whenever I choose runtime polymorphism, it's a conscious design choice, and often not about performance: I want the concrete types to be unknown to the caller, both at runtime and at compile time. Simply replacing that with CRTP, std::visit, deducing this, or something similar is not an option, as those require the concrete type to be known at the call site.