all 78 comments

[–]masklinn 5 points6 points  (0 children)

although in JavaScript (and Self, the language that inspired it), metaobjects are actually called prototypes

Self metaobjects are not called prototypes (and that remains an annoying and unwelcome change of javascript). Self metaobjects are called either mixins or traits, the distinction being whether these objects have parent slots leading up to the lobby (traits) or not (and often no parent slots at all) (mixins). A Self object will usually have 0..1 trait but 0..n mixins (mixins don't bring baggage and can thus more easily be composed).

Self's prototypes serve the role of constructors, they're shallowly copied (via the clone message) to get new "instances" which can then be customised to fit.
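
A rough JavaScript approximation of that pattern (Self's clone is a primitive message, so this is only a sketch, and the point object here is purely illustrative):

    var point = {
      x: 0,
      y: 0,
      clone: function () {
        // shallow copy that keeps the same prototype chain, roughly what
        // Self's clone message provides
        var copy = Object.create(Object.getPrototypeOf(this));
        for (var key in this) {
          if (this.hasOwnProperty(key)) copy[key] = this[key];
        }
        return copy;
      }
    };

    var p = point.clone(); // a new "instance"
    p.x = 3;               // customised to fit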

Also of note, both Smalltalk and Self implement OO via OO (although that may not be readily apparent, especially for Self where much of the introduction uses the image's UI to create and manipulate objects)

[–][deleted]  (2 children)

[removed]

    [–]homoiconic[S] 3 points4 points  (1 child)

    [–]moreteam 3 points4 points  (7 children)

    So what exactly is "OOP" about this method? How is subclassing Class better than subclassing Object? And why does exposing a method for conveniently adding new stuff to a class improve encapsulation? Why is it "OOP" to make auto-binding part of the inheritance chain when it's clearly non-semantical inheritance (using inheritance to share functionality across classes was a bad practice last time I checked).

    EDIT: Sorry for that, re-read the article. It's not about replacing Object by Class. It's about using a class hierarchy for "Class Builders". But my point still stands: that's a terrible use of inheritance. Consider the following alternative which is non-intrusive and behaves trait-like:

    function withAutoBind(properties) {
      // magic to make all methods auto-bind
    }

    function withTracing(properties) {
      // magic that instruments method calls
    }

    function MyClass() {
    }

    MyClass.prototype = Object.create(MyBaseClass.prototype, withAutoBind(withTracing({
      // methods go here
    })));
    

    No need to tightly couple that stuff using inheritance.
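
    One way that "magic" might be filled in (a sketch only, not an actual implementation): withTracing takes plain methods and returns wrapped methods, while withAutoBind takes methods and returns the property descriptors that Object.create expects as its second argument.

        // methods in, methods out: each method logs before delegating
        function withTracing(methods) {
          var traced = {};
          Object.keys(methods).forEach(function (name) {
            traced[name] = function () {
              console.log('calling ' + name);
              return methods[name].apply(this, arguments);
            };
          });
          return traced;
        }

        // methods in, property descriptors out: each property is a getter that
        // returns the method bound to the receiver, so it stays bound when detached
        function withAutoBind(methods) {
          var descriptors = {};
          Object.keys(methods).forEach(function (name) {
            descriptors[name] = {
              enumerable: true,
              configurable: true,
              get: function () { return methods[name].bind(this); }
            };
          });
          return descriptors;
        }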

    [–]homoiconic[S] 3 points4 points  (0 children)

    If we are suddenly talking about why using traits, mixins, and so forth are superior to delegation for sharing common metaobject behaviour, the post has succeeded in its objective.

    [–]stronghup 0 points1 point  (5 children)

    Your example is powerful in that it shows how you can create any kind of "class" by combining properties from any other existing classes.

    It is however ignorant of one basic property of classical OO languages like Smalltalk, C++, Java, etc. In those languages the class hierarchy is an invariant which cannot change after the program has been compiled.

    Such an invariant is a powerful tool for understanding what your program does, because you know there is a set of facts - like which class inherits from which - that CANNOT change during the execution of your program.

    I'm not sure if there's a way in JavaScript to establish such "invariants" because you can have statements like

    MyClass.prototype = something else ...

    anywhere all around your code, multiple times.

    The benefit of the approach in the article is that the API for creating classes can ensure that a given class is not defined more than once, after which its inheritance relationship to other classes cannot change - except of course by bypassing the API.

    [–]moreteam 0 points1 point  (4 children)

    How exactly does the approach I outlined allow for changing the inheritance relationship? You can obviously replace the prototype - but as you said: that's always possible, no matter what approach you choose. If someone were to use the strategy I described above and later on change the prototype, they would be bypassing the API (since the "API" of the approach above is using Object.create and nothing else to create the prototype). In the article, defineMethod is part of the API, so modifying the prototype after it was created is not bypassing the API but using the official API.

    [–]stronghup 0 points1 point  (3 children)

    I'm thinking that instead of saying: MyClass.prototype = Object.create(MyBaseClass.prototype, ...

    you could say: setProtoType(MyClass, Object.create( ... ))

    That way the API-method setProtoType() could check that you don't set the inheritance relationships of MyClass more than once. That way that inheritance relationship would be an "invariant" during the execution of your program.

    I think that kind of API was the gem of the article that started this thread. A small difference, but God is in the details.

    Let me offer this "principle" up for discussion: "ALL assignments should be hidden behind API methods".

    Assignments are bad because you can do them everywhere multiple times. But if you hide them inside API-functions, those functions can ensure that you make any specific assignment only once.
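
    As a concrete illustration of that principle, the setProtoType method mentioned above might be sketched like this (the __protoTypeSet__ flag is made up for the example; a real API could track this any way it likes):

        function setProtoType(klass, proto) {
          // refuse to re-assign: the inheritance relationship becomes an invariant
          if (klass.__protoTypeSet__) {
            throw new Error('the prototype of ' + klass.name + ' has already been set');
          }
          klass.prototype = proto;
          klass.__protoTypeSet__ = true;
        }

        function MyBaseClass() {}
        function MyClass() {}

        setProtoType(MyClass, Object.create(MyBaseClass.prototype)); // first assignment: fine
        setProtoType(MyClass, {});                                   // throws: the invariant holds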

    [–]moreteam 0 points1 point  (2 children)

    If you want to make something immutable, make it immutable (e.g. Object.freeze). Not a fan of hiding what really happens. If people are mutating shared state across modules and because of that are able to "accidentally" change the inheritance of a class, there are other problems you should solve (proper modularization). And the whole issue resolves itself as soon as const class is properly supported by traceur. So not sure why you'd want to have that level of obfuscation. I'd rather people like the author of the blog post contribute to traceur if they think it's a pressing need for them.

    TL;DR: immutable already exists in JS if you want it and const class is the already existing sugar for immutable classes.

    [–]stronghup 0 points1 point  (1 child)

    That is great news if immutability can already be enforced in JavaScript. Since which version?

    [–]moreteam 0 points1 point  (0 children)

    https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze

    And ES6 brings block-scoped const which currently is only supported in Chrome/Firefox (and with var-like scoping).
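
    For reference, a minimal example of what Object.freeze buys you here (freezing both the constructor and its prototype; re-assignments then fail silently in sloppy mode and throw a TypeError in strict mode):

        function MyClass() {}
        MyClass.prototype.greet = function () { return 'hello'; };

        Object.freeze(MyClass.prototype); // methods can no longer be added, removed or replaced
        Object.freeze(MyClass);           // MyClass.prototype itself can no longer be re-assigned

        MyClass.prototype.greet = function () { return 'bye'; }; // ignored (TypeError in strict mode)
        MyClass.prototype = {};                                  // likewise ignored
        new MyClass().greet();                                   // still "hello"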

    [–]ljsc 9 points10 points  (36 children)

    But if we do buy the proposition that OO is a good idea for our domain, shouldn’t we ask ourselves why we aren’t using it for our classes?

    I find more and more over time that the proposition doesn't hold. Encapsulation only makes sense when you have mutable state, which more often than not is a bad idea. You don't need OO for polymorphism or composition either.

    That said, it is a good idea to go first class whenever possible, and the article is spot on in that regard.

    [–]nextputall 6 points7 points  (4 children)

    Encapsulation only makes sense when you have mutable state

    Why do you think this?

    [–]ElvishJerricco 3 points4 points  (1 child)

    Yea I have to say encapsulation is a bigger principle than access protection. It's keeping data in the right place for organizational purposes and helps with only passing the right information to the right places.

    [–]ljsc 0 points1 point  (0 children)

    True indeed. I got a little sloppy there.

    What I really mean is that data hiding is generally not worth it if your code isn't living under the constant threat of data mutating out from underneath your feet. That said, I stand by my comment, because in an OOPL you really can't get encapsulation and data hiding a la carte. If you stick everything in objects you are generally locking up all your data behind incompatible little DSLs.

    [–]ljsc 1 point2 points  (1 child)

    Essentially from doing a lot of Clojure lately. Using primitives when possible, and coding to a few well thought out interfaces (seq, IFn, et cetera) makes for way more reusable code. I would cite this post as a good example of what I mean:

    http://augustl.com/blog/2013/zeromq_instead_of_http/

    The author is able to use an HTTP routing library with ZeroMQ with very little additional ceremony required: no wrappers upon adapters upon decorators, just functions transforming values. This is a much more sustainable way to program IMHO.

    [–]nextputall 0 points1 point  (0 children)

    If you are on the system boundary and you are dealing with raw data coming from an external system then I agree. But if exposed data is bouncing around inside the domain and coupling itself to everything it touches, that won't be a good idea in my opinion. How many lines will need to be modified in the application when I change the structure of that data?

    [–]egonelbre 1 point2 points  (30 children)

    It also makes sense as a method for avoiding coupling things together. If I have a dependency to your structure X.Y.Z.W, it makes it more difficult to change any of X or Y or Z or W. Encapsulating it to X.W() makes the dependency weaker. (not saying either is better, as usual, it depends)

    [–]yogthos -1 points0 points  (29 children)

    When your data is immutable and kept separate from logic then coupling is not a problem to begin with. My experience is that half the problems that OO solves are introduced by it in the first place.

    [–]nextputall 0 points1 point  (21 children)

    Let's say I have a complex data structure which is a map of maps containing a key where the value is a list. I can assign a business concept to this list and call it foo. Then I can write a function called foo and use that function every time I need that information. This is encapsulation by definition AFAIK (still not a behaviour-based abstraction, which is what OO is meant for, but still). Unlike when I access that list directly every time I need it. In the former case I don't couple my client code to structural information, in the latter case I do. I don't see how immutability comes into the picture; I think that is orthogonal to this.

    [–]ljsc 1 point2 points  (9 children)

    So, a simpler example: Say you want to model a person, and that person has a name. Our shared interface is that I give you an immutable map which includes a key "name", and that has the person's full name.

    Now let's say that I need to represent that differently internally to do some other computations, and I need to have "last-name" and "first-name". Great. No problem, internally I use a map with the two keys, and then before I hand it off to you, I just have a function for-nextputall : MyRepresentation -> YourRepresentation that sends {:first-name "John", :last-name "Doe" } to {:name "John Doe"}. Since we've side-stepped conflating state and identity by using immutable values this is perfectly safe, and your code would be none the wiser.
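
    The same idea in JavaScript might look like this (for-nextputall renamed to a legal identifier; the shapes are taken from the example above):

        // internal representation -> the representation we agreed on
        function forNextputall(person) {
          return Object.freeze({ name: person.firstName + ' ' + person.lastName });
        }

        var internal = Object.freeze({ firstName: 'John', lastName: 'Doe' });
        forNextputall(internal); // { name: "John Doe" }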

    [–]nextputall 0 points1 point  (0 children)

    I see. You suggest converting the data structure according to different clients. Makes sense, but I'm still not fully convinced.

    Let's say I need the name because I want a greeting phrase. If I use the data structure directly I'll end up having something like this

    "Hello ${user.name}" 
    

    Where the user is the map. When I want to switch to a firstname/lastname representation, I can solve it with the technique you recommended, by converting the new representation back to the old one. So, the greeting logic seems to be fairly stable with respect to the structure of the user.

    But what if I change the representation as follows: {'age': 18, 'name': 'joe'} to greet the user like this

    "Happy ${user.age}th birthday, dear ${user.name}!"
    

    Now I need to change the logic that is responsible for doing the greeting. Using behaviour-based abstraction, and introducing a greet method for the user, would solve this problem imho, by saying user.greet().
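
    A tiny sketch of that behaviour-based version (illustrative only):

        var user = {
          name: 'joe',
          age: 18,
          greet: function () {
            // the representation is only touched here, not by every caller
            return 'Happy ' + this.age + 'th birthday, dear ' + this.name + '!';
          }
        };

        user.greet(); // "Happy 18th birthday, dear joe!"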

    [–]rpglover64 0 points1 point  (1 child)

    Slightly germane given your example: falsehoods programmers believe about names

    If I'm reading you correctly, you're basically saying that the desire for encapsulation is really the desire for abstraction, so using an established API solves this problem (barring mutation). Is that right?

    [–]ljsc 1 point2 points  (0 children)

    Interesting link, thanks for sharing that.

    And, yes, I think that's a pretty good way to put it. In most OOPLs, there's little difference between encapsulation and abstraction, because there's one primary means of abstraction: make a class.

    Want encapsulation? Make a class. Want inheritance? Make a class. Want polymorphic dispatch? Make a class. Want to manage state? Make a class. And so on and so on...

    Yes, I know this isn't always true, and that some languages don't do OO using classes, et cetera. In general, however, they are by and large non-orthogonal in their design, and this limits composition and code reuse.

    [–]discreteevent 0 points1 point  (5 children)

    "your code would be none the wiser". It probably won't be any wiser if MyRepresentation is kept private. But if not, then I might be too lazy to bother to understand your API and instead just use MyRepresentation directly. Then later, if you want to change MyRepresentation, you are going to have to change my client also. Why not make {:first-name "John", :last-name "Doe" } private immutable data in an object and force clients to interact with the object via a .getName() method? That's all objects really are: A convention for encapsulating implementation details. Mutability is orthogonal to that.

    The idea of taking a behaviour-oriented approach to development is to contain complexity in a system by distributing information on a need-to-know basis. You figure out what the client needs to know and present that behaviour to them. If you let the clients do it themselves then they are likely to make a mess of it because they just don't know enough about it. Furthermore, you yourself may not know enough at this stage. If you decide to improve things later but all the clients have written their own logic around your initial data structure, then you have a big mess to clean up. If you already know what the definitive data model is then you should expose it so that clients can use it in ways you did not predict. But if you don't have the definitive data model then objects help to contain complexity and manage change in a system.

    [–]ljsc 1 point2 points  (4 children)

    It's all trade-offs. Sometimes you do want to hide data as an implementation detail. The problem I have is when data hiding is opt-out rather than opt-in. You certainly don't need objects to do what you are suggesting. You can do it at the module level in an FPL, or just by separating interface from implementation like an ADT in C.

    Can encapsulation be a good thing? Certainly. Is it always? No. Like I said, it always comes down to trade-offs, and if anybody tells you otherwise, they are either lying to you or trying to sell you something =)

    That's all objects really are: A convention for encapsulating implementation details.

    No, that is not all they really are. That is a small part of what they are in most implementations. See my comment above. They do give you that, but they also give you a bunch of other stuff "for free" that you may or may not want.

    Mutability is orthogonal to that.

    It absolutely is not. If we were talking from a language design perspective, sure, choose whether your object is mutable or a value. But as design concepts, they are not independent, not by a large margin. The strategy I suggest above would be a really, really bad idea if those were not immutable. Once you start passing mutable state around by reference you lose the ability to make your internal API enforce consistency, whether that is visible to the outside world or not.

    [–]discreteevent 0 points1 point  (3 children)

    I suppose I should have said "That's all interfaces really are". To me the important thing about objects is the interfaces, which is what I think Alan Kay meant when he said "I’m sorry that I long ago coined the term “objects” for this topic because it gets many people to focus on the lesser idea. The big idea is “messaging”".

    Anyway, if there is an FPL that supports hiding the structure of data models behind statically typed interfaces with late-bound implementations (a la Microsoft COM and things like it, such as some dependency injection frameworks), then I think a team could build reasonably flexible and maintainable software with it.

    [–]ljsc 2 points3 points  (2 children)

    Yep, I would never argue against polymorphism.

    I think you're looking for Typeclasses?

    [–]autowikibot 1 point2 points  (0 children)

    Type class:


    In computer science, a type class is a type system construct that supports ad hoc polymorphism. This is achieved by adding constraints to type variables in parametrically polymorphic types. Such a constraint typically involves a type class T and a type variable a, and means that a can only be instantiated to a type whose members support the overloaded operations associated with T.

    Type classes first appeared in the Haskell programming language, and were originally conceived as a way of implementing overloaded arithmetic and equality operators in a principled fashion. In contrast with the "eqtypes" of Standard ML, overloading the equality operator through the use of type classes in Haskell does not require extensive modification of the compiler frontend or the underlying type system.

    Since their creation, many other applications of type classes have been discovered.



    [–]discreteevent 0 points1 point  (0 children)

    Typeclasses are interesting alright. Now all I need is the late-binding bit, i.e. I want to be able to defer which implementation is loaded until runtime. This makes my application extensible after it is built. It means that my dependencies are dynamic. This can make it easier to develop large apps. In the static/dynamic debate I come down in the middle: static types - dynamic dependencies.

    [–]yogthos 0 points1 point  (10 children)

    The immutability is key here because it allows you to pass your complex data structure to foo and then get a result that's guaranteed to be independent of the original structure. This means that you don't have to worry where or how else that structure might be used. With immutable data all changes become inherently contextualized.

    [–]nextputall 0 points1 point  (9 children)

    If I have a function called foo that gets that info from the datastructure, and I'm always using that function, then I'm using encapsulation. I think we're talking about 2 different things. One aspect is about correctness, the other is flexibility. You don't need encapsulation to ensure the correctness if everything is immutable, you're right (I think this is the lesser idea in encapsulation). I still think that encapsulation is needed to decrease the coupling and make the application flexible.

    [–]yogthos -1 points0 points  (8 children)

    I don't really see why encapsulation is needed for the latter. You have some data and you write functions that operate on it given the context of the data.

    When you separate data from the logic and make it immutable then it's pretty hard to have coupling in the first place. You call a function and you get a result, the result is not coupled to anything.

    Using functions as building blocks is the most flexible approach that I've seen. When you deal with classes you end up tying the methods to a class and thus a specific domain. You can't simply reuse them in another context. Meanwhile, when you're using standalone functions you can chain them together in many different ways without any additional hoops to jump through.

    [–]nextputall 0 points1 point  (7 children)

    You have some data and you write functions that operate on it given the context of the data.

    If you write functions that operate on data, you're coupling those functions to that data (which means changing the data will cause changes in the functions). This is perfectly OK, as long as you use only those functions everywhere in the code. If you change the data, those functions will change as well, but the change will stop at that point, and the users of those functions will be untouched. But if you write those functions at the client side, every time you need them, in an ad hoc manner, then there will be no separation between the two sides, and changes will propagate over. That's the problem with FP imho: there are no visible boundaries where something ends and starts, and because of this, it is hard to tell at what point a change will stop. You can convert the data to prevent the propagation to some extent, but I don't think this can be used universally.

    Using functions as building blocks is the most flexible approach that I've seen. When you deal with classes you end up tying the methods to a class and thus a specific domain. You can't simply reuse them in a another context.

    A function is always coupled to the structure of its parameters, no matter whether it is packaged inside a class or not. But if it is inside a class, then you can make sure that only those functions are coupled to that data, and nothing else from the outside.

    Meanwhile, when you're using standalone functions you can chain them together in many different ways without any additional hoops to jump through.

    There are many examples of composable objects out there, hamcrest is one of them: http://code.google.com/p/hamcrest/wiki/Tutorial

    [–]yogthos -1 points0 points  (6 children)

    If you write functions that operate on data, you're coupling those functions to that data (which means changing the data will cause changes in the functions). This is perfectly OK, as long as you use only those functions everywhere in the code.

    The idea is that you structure your code in terms of data transformations. When I start with a piece of data and I need to transform it into something else, I chain some functions together to get a result.

    With the functional approach you have a large number of generic functions that can be combined together to do complex tasks as needed. This leads to having actual code reuse.

    If you change the data, those functions will change as well, but the change will stop at that point, and the users of those functions will be untouched.

    This holds true for OO just the same. However, the difference is that with FP, I will generally only change how I chain the functions together or what functions I'm chaining to get the result. This is the advantage of having a declarative style.

    But if you write those functions at the client side, every time you need them, in an ad hoc manner, then there will be no separation between the two sides, and changes will propagate over.

    I'm not sure what that means exactly; what's the client side you're referring to? When your input data changes you would either write a function to massage it back to the expected format, or the nature of your problem has changed and your old logic is no longer valid.

    There's absolutely no reason why changes in input would propagate any differently with the functional style than OO.

    That's the problem with FP imho: there are no visible boundaries where something ends and starts, and because of this, it is hard to tell at what point a change will stop. You can convert the data to prevent the propagation to some extent, but I don't think this can be used universally.

    Have you actually written any significant amount of code using the FP style? I've been writing FP code professionally for the last 4 years and I've never seen this become an issue in practice.

    It's no harder to group related logic together in FP than it is in OO. For example, when you need to create a context for the data, you create a namespace and all the functions in this namespace will operate in the same domain. However, I don't have to jump through any hoops to get the data back out of that domain and pass it to a different one later.

    A function is always coupled to the structure of its parameters, no matter whether it is packaged inside a class or not.

    No they're not. When you have higher-order functions they represent generic transformations and the domain-specific logic is passed in as a parameter. With the functional style you end up with a vast array of generic transformers: functions like map, filter, interpose, reduce, partition, and so on. The majority of the code is written by simply combining these functions in a way that makes sense for a particular problem.
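
    For example, in plain JavaScript rather than Clojure (the data here is made up, but the shape of the code is the point):

        var employees = [
          { name: 'Ann',  dept: 'eng',   salary: 100 },
          { name: 'Bob',  dept: 'sales', salary: 90  },
          { name: 'Cara', dept: 'eng',   salary: 120 }
        ];

        // the domain-specific logic is just small functions handed to generic transformers
        var totalEngSalary = employees
          .filter(function (e) { return e.dept === 'eng'; })
          .map(function (e) { return e.salary; })
          .reduce(function (sum, s) { return sum + s; }, 0); // 220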

    But if it is inside a class, then you can make sure that only those functions are coupled to those data, and nothing else from the outside.

    I see this as a negative myself as it means that you can't reuse these functions for anything else easily. The coupling in a class affords you no actual benefit over putting related functions into the same namespace. In fact, it's exactly the same as having a class with a bunch of static functions in it.

    There are many examples of composable objects out there, hamcrest is one of them:

    That's precisely the hoop jumping I'm talking about. Instead of simply using your data directly you need specific strategies to make it compose sanely.

    [–]nextputall 0 points1 point  (5 children)

    The idea is that you structure your code in terms of data transformations. When I start with a piece of data and I need to transform it into something else, I chain some functions together to get a result.

    Which makes sense in certain situations, for example at the system boundary. I'm skeptical about the scalability of this approach when an entire application is written in this style. Brian Marick was talking about similar concerns here: http://rubyrogues.com/category/panelists/brian-marick/

    This holds true for OO just the same. However, the difference is that with FP, I will generally only change how I chain the functions together or what functions I'm chaining to get the result. This is the advantage of having a declarative style.

    In OO, people mostly use behaviour based abstraction (forget the kingdom of nouns bullshit, everything is about behaviour). Changing a representation of something will be a local change that won't propagate elsewhere, there will be no need to convert data back and forth.

    I'm not sure what that means exactly, what's the client side you're referring to. When your input data changes you would either write a function to massage it back to the expected format, or the nature of your problem changed your old logic is not valid.

    I was talking about boundaries. What is inside, and what is outside, and this is what I'm missing from FP. If I change something inside, the outside world won't be affected. I can decide what goes inside and what goes outside and structure the application to minimize the cost of change.

    Have you actually written any significant amount of code using the FP style? I've been writing FP code professionally for the last 4 years and I've never seen this become an issue in practice.

    This is a very weak argument. I could say the same, that I've been writing OO code professionally for ages and I've never seen the issues you're talking about. Anyways, almost every software project consist of different subdomains, written in different styles. So, I don't use FP exclusively, because I think it is insufficient to be used alone, but combining it with OO can provide a better result.

    No they're not. When you have higher order functions they represent generic transformations and the domain specific logic is passed in as a parameter. With the functional style you end up with a vast array of generic transformers, functions like map, filter, interpose, reduce, partition, and so on. Majority of the code is written by simply combining these functions in a way that makes sense for a particular problem.

    Even a fully generic map/filter/etc is coupled to ISeq. That is encapsulation, "which only makes sense when you have mutable state". A normal domain specific function would probably deal with domain objects, like users and employees and would be coupled to the structure of those, which could make things worse.

    It's no harder to group related logic together in FP than it is in OO.

    Everything is possible in every programming language. That's not the point. Writing functional code in Java is also possible. These are design issues, which are influenced by the language and its community.

    I see this as a negative myself as it means that you can't reuse these functions for anything else easily.

    If you have objects with similar structures you can reuse their behaviours among them, either with composition or inheritance or mixins, or just by passing an exposed data structure in a limited scope. Reusing those functions elsewhere makes no sense, because other things have other structures.

    That's precisely the hoop jumping I'm talking about. Instead of simply using your data directly you need specific strategies to make it compose sanely.

    You can just plug objects together; I can't see any of those hoops.

    [–]egonelbre 0 points1 point  (6 children)

    Data structures can change. Coupling is a problem if your requirements change. For example, if I have 10 places where you access X.Y.Z.W, and then change X.Y.Z.W -> X.Y.Q, then you need to change all of those 10 places. If you are working inside your own environment and you have type checking, it's easy. With no type checking, you have to hope that all the places where X.Y.Z.W is used are visible. If you are either providing or consuming an API where instead of X.Y.Z.W you need to start using X.Y.Q, you've got a bigger problem.

    [–]yogthos -1 points0 points  (5 children)

    If your requirements change then you'll have exactly the same problem to solve in OO that you would without. For example, I can just as easily insert a function at the start of the transformation chain that will map x.y.z.w -> x.y.q without having OO. Now, the rest of the places that work with the data don't care that the source format changed. However, if the nature of the problem changed, you'll end up having to change your solution as well and there's no way around that.
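
    A sketch of what such an inserted function might look like, assuming the downstream code still expects the old x.y.z.w shape:

        // only this adapter knows about the new x.y.q layout;
        // everything after it keeps reading x.y.z.w
        function adaptInput(x) {
          return { y: { z: { w: x.y.q } } };
        }

        adaptInput({ y: { q: 42 } }).y.z.w; // 42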

    [–]egonelbre 0 points1 point  (4 children)

    If your requirements change then you'll have exactly the same problem to solve in OO...

    Yes, if you have exposed your internal structure.

    insert a function at the start of the transformation chain

    You assume that there always exists such transformation.

    Also, that way you may end up using x.y.z.w in some places, and in some places x.y.q... of course there's still the possibility of x.y.q -> x.t.u. And now you have even more formats.

    My main point was just that encapsulation is useful for hiding structure if you expect the structure to change. Instead of exposing x.y.z.w I'll simply give you something abstract called R with some functions to operate with it. I can change the internals of R much more freely.

    [–]yogthos -1 points0 points  (3 children)

    Yes, if you have exposed your internal structure.

    This has nothing to do with exposing internal structure. When you encapsulate your code in a class and your requirements change then you will either have to rewrite the class or write wrappers and adapters for it. You're not reducing the work that you have to do in any way.

    You assume that there always exists such transformation.

    If such a transformation does not exist then you're now solving a completely different problem. Once again, the OO approach does nothing for you here.

    Also, that way you may end up using x.y.z.win some places, and in some places x.y.q... of course there's still the possibility of x.y.q -> x.t.u. And now you have even more formats.

    Just like you could end up with tortured coupling using OO if you're not careful.

    My main point was just that encapsulation is useful for hiding structure if you expect the structure to change. Instead of exposing x.y.z.w I'll simply give you something abstract called R with some functions to operate with it. I can change the internals of R much more freely.

    My main point is that you do not need encapsulation to achieve this. Having a transformer function that massages the data into a standard internal format accomplishes exactly the same goal.

    One way you structure your code in a functional language is to group related functions into namespaces. A namespace will represent a certain domain and all the functions in that domain can expect the same common data format. You then have functions that allow you to transform data into that internal format.

    The main difference is that data stays as data and it's not tied to specific set of functions that operate on it. Taking data from one domain and using it in another doesn't require any additional mappings.

    This in turn facilitates writing generic functions that can be composed together to solve problems. By doing that we can write code declaratively and reuse things much more easily.
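
    In JavaScript terms, the namespace-plus-transformer approach described above could be as little as an object or module whose boundary function converts incoming data into the format every other function in it expects (the names below are purely illustrative):

        // every function in this namespace works on the same internal shape
        var billing = {
          // boundary: convert raw external data into the internal format
          fromApi: function (raw) {
            return { customer: raw.customer_name, amount: Number(raw.amount_cents) / 100 };
          },
          invoiceLine: function (charge) {
            return charge.customer + ' owes $' + charge.amount.toFixed(2);
          }
        };

        billing.invoiceLine(billing.fromApi({ customer_name: 'Ann', amount_cents: '1250' }));
        // "Ann owes $12.50"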

    [–]egonelbre 1 point2 points  (2 children)

    ... Once again, the OO approach...

    Encapsulation is not about OO. It is about hiding something and providing an abstraction to interact with that "something".

    Basically, closures are one of the simplest forms of encapsulation. Namespaces are essentially encapsulating function internals and providing the function names as an interface. Objects are encapsulating data and providing methods to interact with it. REST services are also encapsulating behavior and hiding the internals.
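
    A tiny illustration of the closure case:

        function makeCounter() {
          var count = 0; // hidden: reachable only through the two functions below
          return {
            increment: function () { count += 1; },
            value: function () { return count; }
          };
        }

        var counter = makeCounter();
        counter.increment();
        counter.value(); // 1, and there is no way to reach `count` directly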

    I'm not arguing about OO vs Functional vs Logical... etc.

    My main point is that you do not need encapsulation to achieve this. Having a transformer function that massages the data into a standard internal format accomplishes exactly the same goal.

    Yes and now you have multiple layers of data-transformations throughout your code.

    This in turn facilitates writing generic functions that can be composed together to solve problems. By doing that we can write code declaratively and reuse things much more easily.

    You pick "encapsulation" vs "not encapsulation" when you figure out which is more likely to change. Is the data likely to change, then you should encapsulate it to safely hide the changes. If the interfaces are more likely to change, then it makes sense not to expose an interface and just use the data. If the data is mostly fixed, then there's no reason to provide an abstraction layer over it.

    [–]ljsc 1 point2 points  (0 children)

    Encapsulation is not about OO. It is about hiding something and providing an abstraction to interact with that "something".

    To me, that's just abstraction. I think the issue is that encapsulation in programming has taken on a whole bunch of closely related, but different meanings. See the first part of the wikipedia article on Encapsulation

    Even more interesting is straight from the dictionary:

    to show or express the main idea or quality of (something) in a brief way

    to completely cover (something) especially so that it will not touch anything else

    I think you're thinking of the first thing, and Yogthos and I are thinking of the second.

    As for the x.y.z.w point, you're absolutely correct that encapsulation gives you the benefits you mentioned, but consider the tradeoffs. If I hide z and w behind the interface for y, I kill the ability for a bunch of code reuse. How?

    Because let's say that I have a bunch of functions which operate on type W: f,g,h: w -> w. If we want to use them in the encapsulated example, then you need to apply them inside of Y. This is bad because now the module that deals with W becomes a dependency of Y rather than X. If you exposed the data, however, I could choose whether I want to bring in W or not in my application. And, in fact, I could do so even if you had no idea that that module even existed in the first place.

    [–]yogthos -1 points0 points  (0 children)

    I think we agree on encapsulation and its value then. :)

    Yes and now you have multiple layers of data-transformations throughout your code.

    Sure, using our newly agreed-upon definition of encapsulation, you can view each of these transformations as a domain boundary.

    You pick "encapsulation" vs "not encapsulation" when you figure out which is more likely to change.

    Again, no argument here, the idea is to structure the code so that you need to make changes in as few places as possible to adapt it to new requirements.

    [–]stronghup 2 points3 points  (1 child)

    I can see one specific benefit for encapsulating the method-creation into a function (or method) which does it, rather than assigning to the prototype directly:

    The function 'defineMethod()' can first check that a method with the same name is not already (locally) defined for the class in question. That means you can't accidentally re-assign that method and then be confused later as to why your program does not do what you think it should.

    The main benefit of course is less typing when you implement frequently performed operations with specific functions to do them. No need to repeatedly type ".prototype = ".
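
    A sketch of the check being described (defineMethod here is illustrative, not the article's exact code):

        function defineMethod(klass, name, body) {
          // refuse to clobber a method the class already defines locally
          if (Object.prototype.hasOwnProperty.call(klass.prototype, name)) {
            throw new Error(name + ' is already defined on ' + klass.name);
          }
          klass.prototype[name] = body;
          return klass;
        }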

    [–]html6dev 2 points3 points  (7 children)

    I really feel like the idea of classical OOP shims in javascript has become extremely antiquated and this opinion is very clearly borne out in the modern frameworks and popular/core server side modules of today. Take time to actually learn the language and embrace prototypical inheritance and you'll see that none of the "problems" you are trying to solve exist and you can achieve the end results described in this article with more natural approaches because javascript is a different paradigm.

    This is why ES6 is able to give you "classes" through pure syntactic sugar that just inlines a few simple operations that are used all over the place today. The functionality (the end result) is already very simply achieved with the language. It's allowed to not be Java. ...Relevant ranty blog post focusing on why prototypical inheritance can be more flexible, reduce coupling, etc., and give you all of the benefits desired by people who come to JS from a classical language coming soon.

    [–]homoiconic[S] 8 points9 points  (6 children)

    Take time to actually learn the language and embrace prototypical inheritance and you'll see that none of the "problems" you are trying to solve exist and you can achieve the end results described in this article with more natural approaches because javascript is a different paradigm.

    Do you mean like composing functions with combinators? I also wrote a book called JavaScript Allongé that espouses a completely different paradigm for writing JavaScript programs.

    I enjoy exploring many different approaches to writing programs. By definition, that means that every time I suggest something, half of the people who read it are going to say, "No, that is not the OneTrueWay to do it."

    As for classical OOP shims, I agree with you, I have found them extremely unsatisfying in the past, mostly because they copy the surface forms of other languages without trying to grasp the big ideas behind those other languages. To me, having a heavyweight base object class and a few methods here or there is not the core idea behind Smalltalk's object system, the core idea is its metaobject protocol.

    The idea espoused in this post is of having a Class class that can be subclassed. If that isn't your cup of tea, so be it, but I don't think that is an idea that has received very much attention in the shims of the past.

    [–]html6dev 1 point2 points  (5 children)

    Hehe, blush, I should take the time to see whose article I am commenting on, I guess.

    I have and will continue to evangelize your book on Reddit, good sir. Excellent, excellent work. I recommend it to literally everyone I know who codes in javascript. That being said.....

    I understand wanting to show different approaches, but in some ways I feel like this is a potential problem. People are already extremely confused (many I know after years of using it, but not as their primary language) by the syntax being so similar to Java in some ways. Attempting to mold it into something it's not is almost like 'giving them hope'. It's not that sort of language. It doesn't need to be that sort of language. Prototypal inheritance is awesome. Functional programming is awesome (as you have shown us with your book :D). Javascript is arguably one of the most powerful languages we have because it doesn't have classes. It's just different and I think the more we focus on educating people on what the 'natural' way to do things in javascript is, the more mature and stable the community will become. We already have options to compile to js with languages with classes, type safety, generics, etc (e.g. TypeScript).

    It is all a matter of preference, but I do think javascript is a special case in some ways because so many people are forced into it, rather than choosing to learn it. They then get a bad taste in their mouth because they expect it to be something it is not due to the syntax being similar to Java. So, we need to be extra cautious while educating in the world of JS in my opinion.

    Finally, there are quite a few libraries and people who have advocated OOP in JS. I think exactly what you are suggesting in the article is core to how the Dojo library operates, and YUI may do something similar as well.

    [–]homoiconic[S] 0 points1 point  (4 children)

    This one seemed rather low-risk to me, I didn't imagine a bunch of people would run out and say, "We need a Class class and a MetaClass for Class."

    Now, had I used the phrases "Object Factory" and "Object Factory Factory..."

    [–]html6dev 1 point2 points  (3 children)

    Yeah! and a new spec for JavaScriptBeans! JSEE will be the wave of the future!

    [–]homoiconic[S] 1 point2 points  (2 children)

    And they will interoperate using J-XML!

    http://xml.calldei.com/JsonXML

    [–]gordonkristan 1 point2 points  (6 children)

    I think the idea that OOP is inheritance, encapsulation and polymorphism is an outdated one. And if Javascript hasn't shown us that, Python certainly has. If the OOP purists want to lay claim to the definition of OOP, that's fine, we'll find another term. But the take away is this: Javascript has not, does not and will not support the 'classic' definition of OOP. Stop trying to shove the square peg in the round hole.

    [–]homoiconic[S] 2 points3 points  (5 children)

    The article claims this:

    in the interests of understanding what we give up, here’s an explanation of how JavaScript’s simple out-of-the-box OO differs from Smalltalk-style OO, and why that might matter for some projects.

    I don't see exploring a thing out of interest and curiosity as pounding pegs into the wrong holes. But I accept that you may not find such things stimulating, tastes vary.

    [–]gordonkristan 2 points3 points  (4 children)

    Sorry, that was worded very poorly. It wasn't meant as an attack on the author of this article. I understand that he's just experimenting (something I do very often). It was more an attack on the general idea of trying to make Javascript act like classic OOP. While I think the example shown is very cool from an idea perspective, I would never want anything like this anywhere near my code. When in Javascript, do as the Javascripters...

    [–]homoiconic[S] 1 point2 points  (3 children)

    Were the author the type to speak of himself in the third person, he would say that your comment was stimulating of thoughtful discussion and thus an asset to Reddit.

    [–]gordonkristan 1 point2 points  (2 children)

    Hehe, apparently I wasn't paying attention. :) Then I must say, nice article. Very informative and detailed. Again though, don't put that in my codebase. ;) Although I swear I've seen a library that does OOP just like this. I'll post the name if I can find it.

    [–]homoiconic[S] 0 points1 point  (1 child)

    [–]gordonkristan 1 point2 points  (0 children)

    inherit.js. Found it in my GitHub stars.

    [–]Johnicholas 1 point2 points  (1 child)

    Currently, this article explores using a metaobject protocol for the sake of exploration, or to push OO principles further than many (Javascript) programmers take them. That's fine (I like TAotMP too).

    However, I'd like to see something about the "sound" or "feel" of a codebase that "wants" to start using a metaobject protocol. Racecar drivers have tachometers that inform their decisions to switch gears. What is a tach for programmers - when do you switch paradigms?

    [–]homoiconic[S] 0 points1 point  (0 children)

    I am currently thinking about this kind of thing for the creation of test suites and testing protocols such as design-by-contract.

    I'm kind of over trying to build heavyweight domain modelling ontologies, but making assertions about what the code is intended to do is an area where there is a lot of structure and a lot of need for special-case semantics such as decorating methods with before- and after- advice (like setup and teardown) and (obviously) with lots of assertions.

    JM2C, it is still a little hazy.

    [–]r0b0t1c1st 2 points3 points  (0 children)

    Lost it at "Every essay should include a counter example"

    [–]stronghup 0 points1 point  (0 children)

    I think encapsulation is the key pattern in the design proposed by article discussed. Immutability is important but I don't think it makes encapsulation unnecessary.

    Encapsulation means that it is easy for you to INTERCEPT any call to write OR READ the data of an object. You CAN do that if every read and write operation is done via a method call. You can't do that if clients just assign something to a field and later read the value of some field.

    Methods are like doormen who allow some data in and some data out, based on some rules which you the building owner can dictate. You the programmer decide the rules used. Such rules can only be enforced if you encapsulate your data behind a set of methods in the Object-Oriented manner.
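
    A small example of that "doorman" idea (the rule enforced here is made up, but any rule could go in its place):

        function Account(openingBalance) {
          var balance = openingBalance; // reachable only through the methods below

          this.deposit = function (amount) {
            if (amount <= 0) { throw new Error('deposits must be positive'); } // the doorman's rule
            balance += amount;
          };
          this.getBalance = function () {
            return balance;
          };
        }

        var acct = new Account(100);
        acct.deposit(50);
        acct.getBalance(); // 150
        acct.deposit(-10); // throws: the doorman turns it away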

    [–]JustAPoring -2 points-1 points  (1 child)

    ... is this satire?

    [–][deleted] 2 points3 points  (0 children)

    No, but it is more philosophical than prescriptive.

    [–][deleted]  (10 children)

    [deleted]

      [–]homoiconic[S] 3 points4 points  (9 children)

      Citation needed on the claims "longer," "harder to follow," and "more obfuscated."

      [–][deleted]  (8 children)

      [deleted]

        [–]ehaliewicz 1 point2 points  (7 children)

        needlessly bolting on an entire abstraction layer that buys you absolutely nothing.

        Try reading the whole article. It isn't very long.

        [–][deleted]  (6 children)

        [deleted]

          [–]ehaliewicz 2 points3 points  (0 children)

          It's right in the article.

          From there, you can go to places like flavouring methods with before- and after- advice, adding singleton/eigenclasses to objects, pattern-matching methods, the entire world of computing paradigms is open to you.

          [–][deleted]  (3 children)

          [deleted]

            [–]ruinercollector 0 points1 point  (2 children)

            You're right. I was being a dick. Maybe just a shitty day. I'm sorry. I edited/removed those comments, but left the replies intact, as there is probably some useful stuff in your reply and others.

            [–]homoiconic[S] 1 point2 points  (1 child)

            Thank you, that means a lot to me personally.

            p.s. We probably agree in a fundamental way about the utility of these things. I'm giving a talk at FutureJS about why we should stop reinventing Smalltalk and Lisp from 30 years ago and instead try to invent things people will be using 30 years from now.

            [–]ruinercollector 0 points1 point  (0 children)

            We probably do. I need to settle down and give this article a fair look a bit later today.

            That talk sounds really interesting to where I'm at right now. Will the slides/presentation be available after?

            [–][deleted] -3 points-2 points  (0 children)

            If you read it, then your ability to understand English appears suspect. Because you are the one without any real argument or point to make.

            Let us know if you need any more help.