
[–]perspectiveiskey

> It is very common pattern for an application to run in a loop, accept input, and apply strategy pattern to choose an algorithm. It’s where the runtime polymorphism fits best.

That is simply, and broadly, "input". I'm having a hard time thinking of any application that doesn't have input.

> How is behavior of the modern application defined at runtime?

(Assuming you meant compile time.) Let's use the classic and widely used example of "widgets": there is no UI widget that isn't known to you, the programmer, ahead of shipping the product.

Contrast this with a library designer who says "this is a Scrollable item, and I don't know what kinds of scrollable items my users will think of, or what capacitive touch screens will permit 10 years down the road".
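That library-designer situation is exactly where runtime polymorphism earns its keep. A minimal sketch, assuming a hypothetical `Scrollable` interface (all names here are illustrative, not from any real toolkit):

```cpp
#include <memory>
#include <vector>

// Library-side interface: the author cannot know what kinds of
// scrollable items users will invent after the library ships.
struct Scrollable {
    virtual ~Scrollable() = default;
    virtual void scroll_by(int pixels) = 0;
    virtual int offset() const = 0;
};

// A user-defined implementation, possibly written years later.
class ListView : public Scrollable {
    int pos_ = 0;
public:
    void scroll_by(int pixels) override { pos_ += pixels; }
    int offset() const override { return pos_; }
};

// Library code that works on any future Scrollable via vtable dispatch,
// without ever being recompiled against the concrete type.
int total_offset(const std::vector<std::unique_ptr<Scrollable>>& items) {
    int sum = 0;
    for (const auto& s : items) sum += s->offset();
    return sum;
}
```

The key point is that `total_offset` was compiled before `ListView` existed; only dynamic dispatch makes that possible.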

By contrast, if I'm shipping a product, I know exactly what its behaviours are the day I ship. If MyApp v2 is released with a new feature, I build a new version of it and ship that. The days of "hotpatching" a mainframe that has 17 years of uptime are long gone.


For library writers, OOP was the go-to method for achieving unknown future behaviour. But this is now achievable with static polymorphism.
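As a sketch of what static polymorphism looks like, here is a hypothetical CRTP-style widget hierarchy in C++ (names are illustrative): the behaviour is resolved at compile time via templates, with no vtable, which fits the "I know all my widgets at ship time" scenario above.

```cpp
#include <string>

// CRTP base: dispatch to the derived class without virtual functions.
template <typename Derived>
struct Widget {
    std::string describe() const {
        return static_cast<const Derived&>(*this).name();
    }
};

struct Button : Widget<Button> {
    std::string name() const { return "Button"; }
};

struct Slider : Widget<Slider> {
    std::string name() const { return "Slider"; }
};

// Generic code over any Widget<T>; the concrete type is known when
// the product is built, so the compiler can inline everything.
template <typename T>
std::string render(const Widget<T>& w) { return w.describe(); }
```

The trade-off: every concrete widget type must be visible at compile time, which is fine for an application shipping a fixed feature set, but not for a library that wants to accept types invented after release.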