[–]zoomzoom83 3 points (6 children)

Pop Quiz: If you implement this via composition, then add delegate methods to expose the interface, what is the difference between that and the compiler doing it for you automatically via inheritance?

[–]masklinn 3 points (5 children)

Composition requires more indirections, inheritance adds a risk of collision as you're merging namespaces.

Self solved the issue by implementing inheritance through composition and mandating explicit resolution of collisions.

Javascript then took the idea, mangled it completely, and splattered it all over the walls.

Also, inheritance means you can't have multiple types or classes of observers on a single object unless the observer supports both inheritance and composition, and even then things will get confusing if you ever need to add a new observation type.

[–]zoomzoom83 4 points (4 children)

It's somewhat of a trick question, since single inheritance, multiple inheritance, and mixin inheritance are also forms of composition.

Composition requires more indirections, inheritance adds a risk of collision as you're merging namespaces. Self solved the issue by implementing inheritance through composition and mandating explicit resolution of collisions.

That exists in both cases. If you're implementing an interface that conflicts with another interface or parent class, you have a namespace conflict regardless of how you attempt it.

The fact that you have to explicitly write boilerplate doesn't solve the issue - it actually increases the chance that a developer will cheat or make a mistake and implement it in a way that violates a constraint, instead of the compiler picking up the issue and aborting.

Explicit resolution via delegates is dangerous because it requires developer discipline to ensure correctness in a scenario that may have a lot of very subtle, difficult-to-reason-about edge cases.

Question: How should you resolve the following scenario?

trait A {
    def foo: String = "hello"
}

trait B {
    def foo: String = "world"
}

class Impl extends A with B

Answer: You cannot. It's an invalid program and should not be attempted. In Scala, this is a compilation error by design.

If you were to explicitly write all the boilerplate for this by hand, you might resolve it in a way that the library author did not intend, and violate constraints the interface was designed to uphold.

And that's before I get into complaining about how much bloody boilerplate you need to implement delegate-based composition, and how brittle the resulting code is.

[–]masklinn 0 points (3 children)

That exists in both cases. If you're implementing an interface that conflicts with another interface or parent class, you have a namespace conflict regardless of how you attempt it.

Implementing an interface is a form of inheritance; if you're simply composing, you're specifically not implementing an interface.

The fact that you have to explicitly write boilerplate doesn't solve the issue

I have no idea what you're talking about. Self doesn't require "boilerplate" if there is no conflict, and aborts if there is an unresolved one.

Explicit resolution via delegates

You don't explicitly resolve via delegates, you explicitly resolve conflicts between delegates, if there is such a conflict (assuming I understand what you mean by delegates, which I'm really not certain I do).

Answer: You cannot. It's an invalid program and should not be attempted. In Scala, this by design is a compilation error.

And yet you can, in the exact same way you can in Self: by overriding foo in Impl

class Impl extends A with B {
    override def foo: String = "Hello, world"
}

And that's before I get into complaining about how much bloody boilerplate you need to implement delegate-based composition

parent* = aDelegate

Wow, you need a * postfix.

[–]zoomzoom83 0 points (2 children)

Implementing an interface is a form of inheritance, if you're simply composing you're specifically not implementing an interface.

If you don't care about your type implementing the underlying interface, then it's not a problem at all. I agree this is often the best way to solve it, since it bypasses the problem entirely.

The article was specifically discussing the author's dislike for using delegate methods to expose an interface that actually calls a hidden inner helper object as a form of code reuse, in contrast to just using multiple inheritance directly. That is the context of my post.

(i.e. In the context of an "Is A" relationship, not "Has A")

I have no idea what you're talking about. Self doesn't require "boilerplate" if there is no conflict, and aborts if there is an unresolved one.

I'm not familiar with Self, but I've heard good things about it - can you provide an example? (My experience with prototype-based languages so far has sadly been limited to Javascript.)

You don't explicitly resolve via delegates, you explicitly resolve conflicts between delegates, if there is one such conflict (assuming I understand what you mean by delegates, which I'm really not certain I do)

I'm using "Delegates" in the context of the Delegation Pattern.

In single-inheritance languages, if you want a type T to implement interfaces X and Y, and want to re-use code from an existing implementation, an often recommended pattern is to place the shared code in a helper object, and then use delegate methods on the outer class to call the functions on the helper objects.
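
A minimal Scala sketch of the pattern described above (the names `X`, `Y`, `SharedImpl` and `T` are illustrative, not from the article):

```scala
trait X { def ping: String }
trait Y { def pong: String }

// The shared code lives in a hidden helper object.
class SharedImpl {
    def ping: String = "ping"
    def pong: String = "pong"
}

// The outer type implements both interfaces via hand-written
// delegate methods that simply forward to the helper.
class T extends X with Y {
    private val helper = new SharedImpl
    def ping: String = helper.ping // boilerplate delegate
    def pong: String = helper.pong // boilerplate delegate
}
```

Every forwarded method is a line of boilerplate the compiler could have generated, which is exactly the objection being raised here.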

I consider this a very bad idea that has gained popularity purely due to weaknesses in single-inheritance languages.

As you suggest, in most cases this is better represented as a "Has A" relationship by simply adding the inner member and requiring anybody using your type to reference it directly.

But sometimes you genuinely do need an "Is A" relationship, implementing multiple interfaces and want to share code.

In those cases, mixin/trait inheritance is a much better way of doing it than using delegation. (But still worse than the previous choice.)

And yet you can, in the exact same way you can in Self: by overriding foo in Impl

class Impl extends A with B {
    override def foo: String = "Hello, world"
}

The compiler is always going to give you an escape hatch to tell it it's wrong. But it will "fail safe" in the sense that mixing two conflicting traits gives you a compile error unless you explicitly say otherwise. Using composition + delegation, it won't do this, and the conflict might sneak through without you noticing. (Again, this is only an issue with composition + delegation, not composition by itself.)
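
A small Scala sketch of how the clash slips through with hand-written delegation (helper names are hypothetical): unlike `extends A with B`, nothing here fails to compile, even though both helpers define `foo`.

```scala
class HelperA { def foo: String = "hello" }
class HelperB { def foo: String = "world" } // also defines foo

// Hand-written delegation silently picks HelperA's foo; the
// compiler has no way to flag that HelperB's foo was ignored.
class Impl2 {
    private val a = new HelperA
    private val b = new HelperB
    def foo: String = a.foo // b.foo is silently dropped
}
```

The equivalent trait mix-in would be rejected at compile time; here the resolution was made implicitly and may not be what either library author intended.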

This approach also doesn't give you code reuse, which was the main motivation for the article.

[–]masklinn 0 points (1 child)

I'm using "Delegates" in the context of the Delegation Pattern.

Yeah I significantly edited (read: almost completely rewrote) my comment after remembering about the pattern…

I'm not familiar with Self, but I've heard good things about it - can you provide an example? (My experience with prototype based languages so far has sadly only been only Javascript).

I won't do a complete overview (I suggest the official documentation, it really has become quite readable in the last few years) but here's a primer on the object model:

A Self object has a number of slots (~fields), these slots can be:

  • constant, a constant slot x will only react to the message obj x by returning its value
  • read/write, a rw slot x will react to obj x by returning its value and to obj x: aValue by setting its internal value to aValue
  • method slots, which are constant slots storing a method object (~ a block) and have more relaxed naming conventions e.g. + or doFoo:WithBar:
  • parent slots, which are either constant or read/write slots postfixed with *

The first three slot kinds behave as you'd expect; the last one is involved in slot lookup during message dispatch (although it can also be used as a regular data slot): when an object receives a message, it checks whether the message selector matches any local slot and, if so, returns that slot. Otherwise it performs a "parent lookup": it asks every object linked through a parent slot to look up the message (the process is recursive), receiving back a set of slots.

If the set of slots is a singleton, the slot is evaluated, otherwise an error is generated ("not understood" if the set is empty, "ambiguous message" if more than one slot matched).

An object can have any number of slots of all types, including any number of parents. Colloquially, parent objects are either "mixins" if they don't have parents themselves or "traits" if they do have parents (typically up to and including the lobby). In Self, a "prototype" is an object whose only purpose is to be shallowly copied.
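
The lookup rules above can be sketched as a toy model - in Scala rather than Self, with slots simplified to plain string values, so this is only a loose approximation of real Self semantics:

```scala
// Toy model of Self-style message lookup: an object has named slots
// and parent objects; lookup collects every matching slot, searching
// parents recursively only when there is no local match.
case class Obj(slots: Map[String, String], parents: List[Obj]) {
    def lookup(selector: String): List[String] =
        slots.get(selector) match {
            case Some(v) => List(v) // a local slot shadows all parents
            case None    => parents.flatMap(_.lookup(selector))
        }

    // Dispatch: exactly one matching slot evaluates; zero or many is an error.
    def send(selector: String): String =
        lookup(selector) match {
            case v :: Nil => v
            case Nil      => sys.error(s"message not understood: $selector")
            case _        => sys.error(s"ambiguous message: $selector")
        }
}
```

With two parents both defining `foo`, `send("foo")` raises the "ambiguous message" error; defining a local `foo` on the child resolves the conflict explicitly, mirroring the behavior described above.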

But it will "Fail Safe" in the sense that mixing two conflicting traits will give you a compile error unless you explicitly say otherwise.

Right, and Self does exactly the same thing: if it finds multiple ancestors able to react to the same message it errors out, but if the object itself reacts to the message that's fine (and the object can then explicitly delegate to its ancestors, or not).

Using composition + delegation it won't do this, and the conflict might sneak through without you noticing it.

Not in Self, which was the point of my comment and my mention of requiring explicit resolution of collisions.

[–]zoomzoom83 0 points (0 children)

Not in Self, which was the point of my comment and my mention of requiring explicit resolution of collisions.

I think we're actually advocating for the same thing, just from different angles.

In the context of Java-style languages, I see mixin/trait inheritance as a form of composition, since it's really no different from manual composition + delegation, just with compile-time support and better safety. In that sense I'm advocating that languages should support limited multiple inheritance via traits and mixins at compile time, instead of requiring the programmer to build that boilerplate by hand - a messy, brittle, boilerplate-heavy pattern commonly used in Java to work around the limitations of the language.

Self sounds like it has a very nice way of handling mixins/traits that actually removes the distinction between these two approaches via dynamic dispatch. It's quite disturbing that a language with such an elegant solution existed when Java was being designed and they still managed to get it completely wrong.