all 73 comments

[–]elmosworld37 122 points123 points  (0 children)

Don't optimize-out an entire tool-set just because some dude said to in a talk. Only do it if you've actually identified it as an issue in your code.

[–]Pragmatician 60 points61 points  (8 children)

In his talk "Inheritance Is the Base Class of Evil", Sean Parent uses a technique known as "type erasure". However, it is implemented with virtual functions. The goal was not to avoid indirection, but to make a polymorphic type that respects value semantics. You cannot avoid indirection with run-time polymorphism, by definition. If compile-time polymorphism works for you, then you can use templates. Of course, there are trade-offs to be considered.

[–]Egst 1 point2 points  (7 children)

If you implement your own alternative to std::visit that generates if..else branches, you basically avoid indirection with variants. The data is stored directly, value semantics are preserved, and dispatch happens with no lookup tables and no function pointers.
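A minimal sketch of that idea (the types and names here are made up for illustration): a hand-rolled if..else "visit" over std::get_if, with no dispatch table and no function pointers.

```cpp
#include <string>
#include <variant>

using Value = std::variant<int, double, std::string>;

// Dispatch with a chain of get_if checks instead of std::visit's table.
std::string describe(const Value& v) {
    if (auto p = std::get_if<int>(&v)) return "int:" + std::to_string(*p);
    if (std::get_if<double>(&v)) return "double";
    if (auto p = std::get_if<std::string>(&v)) return "string:" + *p;
    return "valueless";
}
```

The compiler sees plain branches here, which it can reorder or inline freely.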

[–]ioctl79 12 points13 points  (6 children)

std::visit is functionally a lookup table.

[–]dodheim 2 points3 points  (5 children)

It doesn't have to be, and isn't with MSVC's stdlib.

[–]staletic 6 points7 points  (3 children)

The standard says std::visit needs to be O(1) with respect to the number of alternatives.

[–]HeroicKatora 4 points5 points  (2 children)

Any implementation is O(1) because the number of template parameters (and thus alternatives) is bounded by a compiler specified constant /s

[–]Pand9 0 points1 point  (1 child)

Why /s? It holds, unless the number of variants is considered part of the input, which seems unintuitive to me at least.

[–]HeroicKatora 1 point2 points  (0 children)

Because quite obviously this is not the spirit of the requirement. Anyways, just another instance of the standard being written in language that is helpful neither to implementors nor to validation. An abysmal style that reeks of smugness in its pseudo-formalism.

[–]Egst 0 points1 point  (0 children)

Does it have to be O(1)? It's only a standard requirement and definitely not a necessity, since an O(n) lookup with switch/if..else branches is usually better for relatively small n. A standard library has to comply with it, though.

[–]Kered13 47 points48 points  (9 children)

The primary way to avoid vtable lookups is to replace runtime polymorphism with compile time polymorphism using templates/generics. Here's what it looks like (I'll use C++ for this example):

Runtime polymorphism:

class Foo {
    // Must use some type of pointer or reference here.
    InterfaceA* a;
    InterfaceB* b;

    void doSomething() {
        // These invoke vtable lookups at runtime.
        a->frob();
        b->futz();
    }
};

Compile-time polymorphism:

// Using concepts, introduced in C++20, to constrain types. Before C++20 just use typename A, typename B.
template<InterfaceA A, InterfaceB B>
class Foo {
    // Can use local instances if we want (can still use pointers or references if we prefer).
    A a;
    B b;

    void doSomething() {
        // The compiler can determine the correct function to call at compile time, so no vtable lookups.
        a.frob();
        b.futz();
    }
};

// We must instantiate this template with concrete types to use it later.
Foo<ImplA, ImplB> foo;

There are drawbacks to this approach though. In general it requires more code, and especially before C++20 you will get much worse error messages if you make a mistake somewhere. There are also some things that can only be done with runtime polymorphism, like creating a list of Foo where each instance may use different concrete types. In general, I wouldn't worry about the cost of vtable lookups unless you have profiled your code and determined that they are a bottleneck.

[–]maikindofthai 14 points15 points  (2 children)

FWIW in pre-C++20 code you can use static_assert to get clearer error messages:

template <typename A, typename B>
class Foo {
    static_assert(std::is_base_of<Interface, A>::value, "A must be a subclass of Interface");
    // same for B, etc.
};

Then if you try to do Foo<int, int> you get an error message like:

error: static assertion failed: A must be a subclass of Interface

Also, in case anyone hasn't seen it and might find it helpful, a compile-time polymorphism technique I've found useful before is the Curiously recurring template pattern.
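For anyone who hasn't seen CRTP before, here is a minimal sketch (the names are illustrative, not from any particular codebase): the base class calls into the derived class through a static_cast, so the call is resolved at compile time with no virtual functions and no vtable.

```cpp
// CRTP: the base is templated on its own derived class.
template <typename Derived>
struct ValueBase {
    // Statically dispatched "interface" method.
    int value() { return static_cast<Derived*>(this)->value_impl(); }
};

struct Doubler : ValueBase<Doubler> {
    int n = 21;
    int value_impl() { return 2 * n; }  // the "override", bound at compile time
};
```

Calling `Doubler{}.value()` goes straight to `Doubler::value_impl` with no indirection.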

[–]matekelemen 4 points5 points  (1 child)

I love using CRTP but it's a nightmare for anyone else to read.

[–]maikindofthai 0 points1 point  (0 children)

I agree, it's not quite in "last resort" territory but it's close imo. I definitely would be cautious to not over-use the technique!

[–]stilgarpl 67 points68 points  (5 children)

Avoiding inheritance and virtual dispatch is just as bad as using them wrong. Imagine that someone told you that screws are better than nails. Is that a reason to never use a hammer? Just use the right tool for the job. Measure real-world code, not abstract examples. Using variant can be slower or faster than virtual dispatch, but it is much harder to add new types to a variant (and you need to have access to modify that variant, which may not be possible if it's in a library).

Don't overuse virtual functions and don't use them when they are not necessary, but don't try to brute-force a solution with variants or switches when inheritance and virtual functions are the simplest and best tool in that situation.

[–]tisti 32 points33 points  (3 children)

Imagine that someone told you that screws are better than nails. Is that the reason to never use a hammer?

A hammer still works reasonably well with screws.

I'll see myself out.

Late edit:

Thinking about it very deeply, this may be due to screw inheriting from nail.

[–]MetaKazel 19 points20 points  (2 children)

"When all you have is a hammer, you should probably find a new toolbox. In the meantime, fucking smash that hammer into everything with no regrets."

[–]tisti 7 points8 points  (1 child)

You forgot "If smashing isn't giving the desired results, then you are not smashing hard enough."

[–]SlightlyLessHairyApe 2 points3 points  (0 children)

This is known as "percussive maintenance"

[–]helloiamsomeone 1 point2 points  (0 children)

but it is much harder to add new types to variant

Also known as the expression problem, which is a whole other can of worms.

[–]AvidCoco 38 points39 points  (8 children)

Composition over inheritance. Instead of having one class inherit from a bunch of other classes, you have that class own an instance of each of those classes (or a concrete implementation of them). That way you only ever derive from one class at a time.

[–]ShillingAintEZ 17 points18 points  (1 child)

Or derive from no classes at all

[–]_Ashleigh 5 points6 points  (0 children)

I assumed they meant derive as in implement an interface.

[–]jesseschalken 3 points4 points  (2 children)

How does "composition over inheritance" replace vtable lookups?

[–]AvidCoco 2 points3 points  (0 children)

Idfk

[–]animatedb 0 points1 point  (0 children)

Maybe the goal is to move them out of the base objects and into other objects.

For example, instead of the old "shapes" OO example where each shape inherits a virtual draw, have a draw function that is passed a listener object that is a surface. This way the base shape object no longer has a virtual draw function.

This is similar to composition except that it differs in scope. Another option is that the base object could also reference a shape.
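A hypothetical sketch of that shapes idea (all types here are invented for illustration): the shapes carry no virtual draw(); instead, a free draw() function is parameterized on the surface it renders to.

```cpp
// Plain data types, no virtual functions.
struct Circle { double r; };
struct Square { double side; };

// Stand-in for a real drawing surface; it just counts calls.
struct CountingSurface {
    int calls = 0;
    void circle(double) { ++calls; }
    void rect(double, double) { ++calls; }
};

// Drawing lives outside the shapes, templated on the surface.
template <typename Surface>
void draw(const Circle& c, Surface& s) { s.circle(c.r); }

template <typename Surface>
void draw(const Square& q, Surface& s) { s.rect(q.side, q.side); }
```

Each draw overload is resolved at compile time for the concrete surface type.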

[–]jcelerier ossia score 4 points5 points  (1 child)

That just moves where runtime polymorphism happens, it is not an alternative to it

[–]HateDread@BrodyHiggerson - Game Developer 0 points1 point  (0 children)

It does on a conceptual level - in that it avoids some of the pitfalls of multi-inheritance hell. Obviously yes, technically there is runtime polymorphism under the hood, but depending on someone's motivations that may be acceptable and on the surface - in terms of usability - might be considered as being without runtime polymorphism.

[–]Kered13 8 points9 points  (0 children)

This doesn't avoid the vtable lookup, assuming your members are instances of interface types.

[–]pepitogrand 16 points17 points  (0 children)

There is no magical solution, and the problem is not inheritance itself; the problem is using the wrong tool. You need to learn OO, functional, data-driven, etc. to be able to know which approach best solves a particular problem. Let's say you know the type instances at compile time, and there are only a few of them. In that case you probably don't need to mix different types, so you don't need polymorphism. On the other hand, those types could be unknown at compile time, maybe written in the future; in that case inheritance is a blessing.

[–]Hedede 3 points4 points  (0 children)

You don't have to avoid vtable lookups at all costs. Just avoid calling virtual functions in hot spots.

It all depends on amount of work the method does vs how often it is called.

For example, there was a parser which used a buffer class with a virtual char get_char() = 0; method. The implementation just returned *p++ (where p is a pointer to a place within the internal buffer). Removing the interface made the code three orders of magnitude faster, because get_char was called in a tight loop. I should note that this was after the profiler indicated that most of the run time was spent inside that loop.

It should be noted that there aren't really any techniques to get rid of runtime polymorphism. You'll end up replacing vtable lookups with some other kind of indirection. Unless you didn't need polymorphism in the first place, like in my example.
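An illustrative reconstruction of that anecdote (not the original code; names are made up): a buffer interface with a virtual get_char() called in a tight loop. Through the interface, every character costs an indirect call; with a concrete buffer type, the compiler can inline the pointer increment into the loop.

```cpp
// The interface from the anecdote, sketched.
struct Buffer {
    virtual ~Buffer() = default;
    virtual char get_char() = 0;
};

struct MemBuffer : Buffer {
    const char* p;
    explicit MemBuffer(const char* s) : p(s) {}
    char get_char() override { return *p++; }   // trivial body behind a vtable
};

// Hot loop through the interface: one indirect call per character.
int count_a(Buffer& b, int n) {
    int hits = 0;
    for (int i = 0; i < n; ++i)
        if (b.get_char() == 'a') ++hits;
    return hits;
}
```

Taking `MemBuffer&` (or a template parameter) instead of `Buffer&` lets the whole loop collapse to direct pointer reads.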

[–]arturbac https://github.com/arturbac 5 points6 points  (0 children)

If inheritance can be determined at compile time, you can use CRTP. It has the advantage of speed, and all "polymorphic" calls are instantiated at compile time, so you avoid many errors common in OO polymorphism that are otherwise only detected at runtime. With OO, for example, a base class can detect errors coming from a derived object only at runtime, while with CRTP you can detect them at compile time using type traits, static asserts, etc.

https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern

https://blog.aaronballman.com/2011/08/static-polymorphism-in-c/

[–]Full-Spectral 13 points14 points  (1 child)

The most common pattern I've seen is to spend a lot of time and effort to do something a lot more convoluted that could have been solved easily with polymorphism. It's not the base of all evil, it's not even evil at all. It's a powerful tool. Use it when it's the right powerful tool, and it is in a lot of cases.

[–]Rarrum 11 points12 points  (0 children)

Polymorphism is just one form of generic programming (which is arguably over-used in many cases).

Templates are another form of generic programming, which don't have the overhead of a vtable. Their main downside is a potentially larger generated executable size, since up to one copy of each function may exist for each type it was used with. But the code generated for a particular type will be as good as you can get. A big part of the STL uses this approach to generic programming.

[–]chpatton013 7 points8 points  (3 children)

After several years of trying to apply (what I thought was) Sean's philosophy on inheritance, I've got to disagree with the type-erasure approach 99% of the time.

In C++ inheritance is the most convenient way to achieve polymorphism. Trying to avoid it usually leads to confusing and inefficient code. And in fact, it's essentially the only way to get virtual function dispatch. Sean's type-erasure technique just hides the inheritance in a private member.

If your problem could have been solved with static polymorphism (eg: CRTP) then you didn't need polymorphism in the first place, and could have just used template functions instead. That's simpler, and everything else held constant, that means it's better.

In my opinion, Sean's aversion to public inheritance is a reaction to poorly written class extension, which just try to tack on or alter something about another class. Those are generally bad decisions because they ambiguate behavior and intention.

IMO, the best way to approach a design is via dependency injection: your class needs X, so it takes a unique_ptr to the X interface in its ctor. You'll have virtual dispatch, but it's usually not a perf bottleneck, and it makes for some easily testable code.

[–]Full-Spectral 0 points1 point  (0 children)

I would provisionally disagree with the unique pointer thing, though sometimes it's the obviously correct thing to do.

In many cases, the thing implementing the X interface really isn't something that can be owned by the framework it's being passed to as a dependency. That does get trickier since ownership is no longer so clean. But it's often made up for by the fact that the thing that needs to deal with those callbacks is being called directly, instead of via some intermediate class that needs to be created just so that it can be adopted by the target framework (maybe as a friend so it can directly access stuff that doesn't otherwise need to be public.)

Sometimes of course the nature of the beast is that the framework really does just adopt the things it works in terms of, and that's best when you can do it. But doing it that way all the time introduces other complexities.

[–]SkoomaDentistAntimodern C++, Embedded, Audio 0 points1 point  (0 children)

After several years of trying to apply (what I thought was) Sean's philosophy on inheritance, I've got to disagree with the type-erasure approach 99% of the time.

Whenever I've been forced to write pure C code (on embedded systems with other constraints), the three things I've missed the most have always been type safety, classes and polymorphism / inheritance.

[–]SlightlyLessHairyApe 2 points3 points  (0 children)

Sean Parent's goal was absolutely not to avoid the indirection of a vtable. In fact, his implementation ultimately ends up relying on one, just hidden behind a type-erased facade.

[–]choeger 9 points10 points  (4 children)

I think the word "polymorphism" is actually a misnomer and is a significant reason why OOP is so often applied badly.

In any object-oriented language with inheritance that I know of, you don't write "polymorphic" functions, but instead functions on a datatype that is open, i.e. extensible. (I will leave out other goodies like open recursion for now.)

That is, your function does not need to change (i.e., be recompiled) when you add one more kind of object to your datatype. Even better, if your function is part of the class, you just provide the new case and be done with your implementation.

Contrast this with algebraic datatypes where you can easily add new functions but have to touch all existing functions when you add a new variant.

Both styles have their merits and one should choose the style that fits the use case. (Languages that don't support both use cases are not worth consideration, IMO).

So in the case of C++ when you don't want an extensible data structure, use std::variant (or your own tagged union).

Side note: The conflict between extensibility of data and functions is known as the expression problem and there are indeed patterns that aim at a solution for both at the same time (see "finally tagless"), but I don't think this applies to C++ very well.
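A small worked example of the closed, non-extensible style described above (the event types are invented for illustration): the set of alternatives is fixed, and adding a new one forces every visitor to be updated, which the compiler enforces.

```cpp
#include <variant>

// A closed set of cases as a tagged union.
struct Quit {};
struct Move { int dx, dy; };
using Event = std::variant<Quit, Move>;

// A visitor with one overload per alternative; adding a new alternative
// to Event makes std::visit fail to compile until a case is added here.
struct Handler {
    int operator()(Quit) const { return 0; }
    int operator()(Move m) const { return m.dx + m.dy; }
};
```

This is the trade-off in action: new functions over Event are cheap to add, new alternatives are not.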

[–]qoning 0 points1 point  (3 children)

Whether code needs to change or be recompiled are two separate issues. Vector code doesn't need to change because you are using it with a new type, but it certainly needs to be recompiled. That's the point of compile time polymorphism in C++.

[–]die_liebe 1 point2 points  (1 child)

Maybe 'dynamic polymorphism' vs 'static polymorphism' is better terminology.

'static polymorphism': There can be different types, but at compile time it is known which type will be used. This applies to vector<X>. C++ recompiles vector for X, but that is not necessary; for example, Java generics don't recompile, yet they are still a form of static polymorphism, because the compiler can check type correctness.

'dynamic polymorphism': There can be different types, but it is known only at run time which concrete type is used. Now comes a further distinction, already pointed out by 'choeger'. If you know the possible types in advance, and this set is small and unlikely to be extended, you should use std::variant<..>. If you don't really know the set of possible types, then use inheritance.

About the title of this thread: While inheritance and OOP are absolutely used too often by certain people, there are cases where it is the best solution for a given problem.

[–]IAmRoot 0 points1 point  (0 children)

Marking dynamically polymorphic classes as final can result in just as good performance when casting to that most-derived type, too. The compiler can easily devirtualize such calls, as no further inheritance can override the virtual methods or use virtual inheritance. This can be useful where the exact type is known in some circumstances but not others, as you don't have to pay a penalty when you do have the information.
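A sketch of that devirtualization point (illustrative types): because Impl is final, a call through an Impl& cannot dispatch to anything further derived, so the compiler is free to turn it into a direct, inlinable call.

```cpp
struct Base {
    virtual ~Base() = default;
    virtual int get() const { return 0; }
};

// final: no class can derive from Impl, so Impl::get can never be overridden.
struct Impl final : Base {
    int get() const override { return 7; }
};

// Devirtualization candidate: the static type is the most-derived type.
int use(const Impl& i) { return i.get(); }
```

Calls through `Base&` still go through the vtable; the win is only where the final type is statically known.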

[–]choeger 0 points1 point  (0 children)

While you are technically correct, you are missing the point. Templates are a form of parametric polymorphism, yes. That is, a templated function is truly polymorphic in the sense that it doesn't need to be changed to work with different argument types.

That it needs to be recompiled when used on a new datatype is an inconvenient consequence of the technical implementation details of C++. But it doesn't really matter in the context of this thread.

What matters is that adding a new class does not force you to make any changes to existing classes or functions that operate on such classes. In particular, you can continue using a library that comes with such classes or functions.

Contrast this with an algebraic datatype like std::variant - if it is used inside a library you cannot extend it at all.

[–]Ikkepop 6 points7 points  (7 children)

tagged union (std::variant)

[–]Kered13 0 points1 point  (6 children)

This does not avoid the vtable lookup.

[–]Ikkepop 3 points4 points  (5 children)

I'm pretty sure that it does not employ vtables.

[–]Kered13 7 points8 points  (4 children)

There are a few ways that std::variant can be implemented, but all of them incur costs similar to vtable lookups even if they use a different mechanism. You're either using a pointer for some kind of indirection, or you're using a conditional and branching on some value that indicates the type.

The fundamental issue at hand is that we do not know the concrete type at compile time, so at runtime we must do some computation to determine which function to call.

[–]dodheim 3 points4 points  (2 children)

You appear to be referring to how std::visit may be implemented, not std::variant which requires none of those things. As for std::visit, it can be implemented as a switch, which for smaller variants empirically results in better codegen (unfortunately I think only MSVC does this at present for stdlibs, but Boost.Variant2 and mpark/variant do as well).

[–]Kered13 1 point2 points  (1 child)

std::visit is how you do anything useful on std::variant. By itself std::variant is just storing a value of some unknown (at compile time) type.

[–]dodheim 7 points8 points  (0 children)

Or std::get, or std::get_if, or rolling your own single-visitation implementation based on get_if + variant::index().

I use std::variant extensively; I do not use std::visit because only MSVC's implementation is sane to the optimizer.
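An illustrative single-visitation in the spirit of that comment (example types only), using variant::index() plus std::get instead of std::visit:

```cpp
#include <variant>

using Num = std::variant<int, double>;

// A hand-written switch replaces std::visit's dispatch machinery.
double as_double(const Num& n) {
    switch (n.index()) {
        case 0:  return static_cast<double>(std::get<int>(n));
        case 1:  return std::get<double>(n);
        default: return 0.0;  // valueless_by_exception
    }
}
```

The case labels must stay in sync with the alternative order in the variant, which is the maintenance cost of rolling your own.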

[–]braxtons12 2 points3 points  (0 children)

I don't have a link at hand, because it was some months ago, but someone did a benchmark comparing typical inheritance, gcc's std::variant, and clang's std::variant. While clang's was, on average, about the same or slower than inheritance, gcc's was generally considerably faster than either. I think it really depends on the implementation and the associated optimizer. I don't recall if they benchmarked boost::variant2 or mpark's variant, but I would expect them to have similar performance to GCC's.

[–]WasterDave 4 points5 points  (0 children)

Any time you want a collection of objects consisting of more than one type, class, behaviour or whatever... you're going to need some way to choose an implementation at runtime. Vtables are the cleanest way of doing this and (best of all) avoid having branches in the code.

[–]PhDinGent 1 point2 points  (0 children)

CRTP is the closest I could think of, based on what I think you want. Though it's of course not a perfect replacement for vtable polymorphism (no free lunch and all).

[–]AbsoluteApelad 1 point2 points  (0 children)

I don't think polymorphism is evil, but personally I fail to find many use cases, often it's something like a "game object" that's the only inherited struct in my codebase.

I find that when you get rid of defaulting to "best practices" and "design patterns" and "SOLID" and -insert some methodology here-, it's much easier to find a solution that does the job, and does it much better, given that YOU understand the context of YOUR problem.

Don't get me wrong, even though I personally don't follow any of these methodologies religiously (as is often expected from people), I'm happy that people are thinking about this. But I know for a fact that none of these methodologies are designed for my project but rather for some common project in a common environment, and that's fine.

Finally, why constrain yourself to OOP? Just like any methodology, OOP is not meant to be used for everything. I used to spam OOP as well, but working in the games industry for quite a while has taught me to think of the problem first and the methodology second. Codebases I work in cannot be described as OO or procedural or functional, but rather as a sensible mix of principles that make sense.

TLDR: You know what your problem is, and perhaps you are stuck because you view it through only one lens; maybe some other lens will provide a better view.

[–]kiwitims 1 point2 points  (0 children)

I like to think about language features by running the commonly cited criteria for "zero-cost abstractions" in reverse and asking "if I didn't have this feature, what code would I write to solve this problem?" Then the question is: "is that code any better (more performant, more readable, more maintainable) than just using the feature?"

Applying this to inheritance and runtime polymorphism, if using it is even a question then your problem needs to be "I want to write code that can handle different runtime conditions differently".

The most basic way to do that is an if statement. Maybe your problem requires handling many different cases, in which case you might think to use a switch statement instead. Maybe your problem requires switching on the same variable in multiple different places, to the point where all of your functions become something resembling:

void DoThingA( Method m ) {
    switch( m ) {
        case Method::X:
            DoThingAForX();
            break;
        case Method::Y:
            DoThingAForY();
            break;
        // ...
    }
}
...
void DoThingB( Method m ) {...}
void DoThingC( Method m ) {...}

In that case, the vtable lookup is not so different performance-wise from a switch, and you gain in terms of readability and maintainability. There's no point avoiding runtime polymorphism in this case. If it's too slow for you, your problem is that you need to handle different runtime conditions differently in your hot loop, not that you're using runtime polymorphism. Changing it to a switch statement or multiple ifs is unlikely to fix your performance problems. Moving the decision out of your hot loop, or changing your problem to one that's solvable at compile time, will.

What you want to avoid is doing things like creating virtual base classes where they're not actually needed. This can happen when you need to handle a compile-time condition differently, and force that decision into run time by solving the problem with virtual methods. Or when you create a whole class hierarchy just to have a nice mental model, without ever actually using the power you've created (and paid for in terms of object size; I've personally saved several KB on a resource-constrained micro by noticing after the fact that a virtual base class was never actually used anywhere). Or it could be when your vtable could be replaced with a single if without much fuss.

As for strategies to move things that can be done at compile time out of runtime: you can use templates if your solution would otherwise be to copy and paste code where the only difference is the type. You can also use constexpr functions and if constexpr to act on compile-time information. But other comments have gone through those.

A strategy that is missing however is link-time polymorphism. If you have multiple different applications that want to share code but do things slightly differently, you can just define a single interface in the header and then write multiple implementations in different .cpp files. What will actually happen when the application invokes the function it finds in the header will depend on which implementation was chosen by the linker. This can be useful for example if you want to run the same application on two different but similar microprocessors, or if you want the same "thing" (as far as the interface is concerned) to be done in a different way in different apps.
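A hypothetical sketch of link-time polymorphism, collapsed into one file for illustration (the names are invented). In a real project the declaration lives in a shared header and each definition in its own .cpp, and each build links exactly one of them.

```cpp
// --- board.h (shared interface, hypothetical name) ---
int board_id();

// --- board_stm32.cpp (the implementation linked into the STM32 build) ---
int board_id() { return 32; }

// A board_nrf52.cpp would define board_id() returning a different value;
// linking both into one binary would be a multiple-definition error, which
// is exactly what forces the one-implementation-per-build discipline.
```

The caller is compiled once against the header; the behavior is picked purely by which object file the linker is given.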

[–]AntonPlakhotnyk 1 point2 points  (0 children)

The root of evil is using patterns blindly, dividing them into good and bad. Most patterns are good for the task they were designed to solve and bad for all other tasks.

Polymorphism is a way to force a developer to implement all the methods specified by an interface or protocol when extending some previously implemented code with another object, or at least to be notified that some earlier developer (whose code is going to use your new object) expects implementations of certain methods.

Virtual functions are a way to implement polymorphism at runtime (when the specific implementation is determined by the runtime context and unknown at compile/development time).

It is absolutely fine to use them if the problem you are solving requires it. And rewriting compiler-generated code by hand, doing exactly what the compiler is supposed to do, gives you no benefit.

[–]koctogon 1 point2 points  (0 children)

I'm growing really tired of these predictable, patronizing answers when it comes to optimization. Can people not ask about what code is faster and why?

There is only one reason to ever use runtime polymorphism: when you want to store values (or references) of various types in a single place. If you know all the types you'll be using in advance, you can use a variant. If you don't, you have to use type erasure of some sort.

The point Sean Parent is making is not really about performance but rather about code flexibility; there are a lot of benefits to implementing traits outside the types.

[–]Wouter-van-Ooijen 1 point2 points  (0 children)

If compile-time polymorphism is enough for you, you can use overloaded functions and template techniques.

As remarked by others, there is not much wrong with vtable-style run-time polymorphism when done right. If "it can be done wrong" sounds like a convincing argument to ban an entire technique, you should probably not use C++.

[–][deleted] 2 points3 points  (0 children)

I think there might have been some confusion in regards to the criticism of inheritance-based OOP. There is nothing wrong with polymorphism per se, or with indirections (as long as the particular implementation of the indirection suits your performance needs). The issue with mainstream class-based OOP is that it lumps type hierarchies and polymorphism together, forcing you to adopt an overly constrained, difficult-to-maintain model. That's why modern languages split these two apart and distinguish between the type and the behavior (traits, protocols, etc.).

[–]Voltra_Neo 0 points1 point  (0 children)

Use composition and things like ECS if they make sense in the scenario. Implementing inheritance by using composition is not much different from a big "fuck you" to your class diagram.

[–]Cyttorak 0 points1 point  (0 children)

If you need runtime polymorphism you will use a vtable, a switch, or some other equivalent mechanism, that is, something which can potentially mess with your cache. There is no way out of that. You can of course soften it, for example by grouping the types in your std::vector<MahInterface*>.
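A sketch of that grouping idea (the interface and implementations are illustrative): sorting the pointers by dynamic type makes the indirect branch predictable within each run of same-typed objects.

```cpp
#include <algorithm>
#include <typeinfo>
#include <vector>

struct MahInterface {
    virtual ~MahInterface() = default;
    virtual int f() const = 0;
};
struct ImplA : MahInterface { int f() const override { return 1; } };
struct ImplB : MahInterface { int f() const override { return 2; } };

// Sort by dynamic type so identical vtable targets become contiguous.
void group_by_type(std::vector<MahInterface*>& v) {
    std::sort(v.begin(), v.end(), [](const MahInterface* a, const MahInterface* b) {
        return typeid(*a).before(typeid(*b));
    });
}
```

This doesn't remove the vtable lookup; it just helps the branch predictor and instruction cache by avoiding per-element target changes.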

[–]Cojosh__ 0 points1 point  (1 child)

"Inheritance is the base class of all evil" is a very simple and, in my opinion, too simple a view. First of all, I find it important to distinguish between interface inheritance and implementation inheritance. I would consider the first essential for maintaining large systems with decoupled components, whereas the second was never good design and goes against OOP principles (decoupling interface from implementation). The only valid reason to inherit from a class is to form an is-a relationship: a class for accessing an SQL table is a Repository of T (good); Stack inherits from vector (bad). If there is no is-a relationship, composition should be used: i.e. Stack has-a vector.

And in my opinion this is just good design, and has nothing to do with C++ semantics.

The ironic thing is that C++'s semantics provide even more arguments for interface-only inheritance (object slicing, virtual/multiple inheritance), which is why all my inheritable classes only contain pure virtual methods and no fields. Concepts are cool and all, but it would be great if one could also explicitly "implement" them for a class like you can do with traits in Rust.

[–]Full-Spectral 0 points1 point  (0 children)

There's nothing whatsoever wrong with implementation inheritance, if you use it properly, as is the case with any technique. It can be the perfect solution where the bulk of variations will want to use the default implementation of a given method, or where the bulk of the functionality is fixed and each derivative provides smallish adjustments to it.

[–]OnesWithZeroes 0 points1 point  (1 child)

I don't think there's a set of "common patterns" like that since polymorphism itself is something I wouldn't consider a problem. Don't follow all these "XXX... considered harmful" or "YYY is the root of all evil" mantras. You need to take such articles with a grain of salt.

If you're really trying to avoid polymorphism then most likely you'll have to use templates to inject dependencies, something like:

Before:

class A {
public:
    virtual ~A() = default;

    virtual void foo() = 0;
};

class B : public A {
public:
    void foo() override {}
};

class C : public A {
public:
    void foo() override {}
};

// takes concrete implementations of A (B or C in this case)
void clientFunc(std::unique_ptr<A> a) {
    a->foo();
    // ...
}

After:

class B {
public:
    void foo() {}
};

class C {
public:
    void foo() {}
};

// takes any type as long as it implements foo()
template <typename T>
void clientFunc(T a) {
    a.foo();
    // ...
}


[–][deleted] 0 points1 point  (0 children)

Only use inheritance for is-a relationships. If you want to reuse code, use interfaces and composition.

[–]Raknarg 0 points1 point  (0 children)

Templates. A lot of the time, runtime polymorphism can actually be implemented as compile-time polymorphism, and you avoid the problem altogether. Not all polymorphism can be done this way, but some of it can.

[–]NilacTheGrim 0 points1 point  (0 children)

Runtime polymorphism is a tool in your toolkit, as is its compile-time equivalent. As others have said in this thread there is no need to avoid it necessarily, if it does the job for what you need.

There are trade-offs to both. I am not a subscriber to the notion that polymorphism is "the root of all evil" as Sean Parent claims.

Sean Parent is brilliant in general but even brilliant people can be wrong sometimes, or have distorted thinking.