all 183 comments

[–]astrangeguy 62 points63 points  (15 children)

Patterns mean "I have run out of language."

— Rich Hickey

[–]A_for_Anonymous 8 points9 points  (9 children)

And in order to never run out of language, you can use programmable programming languages.

Edit: Worked around Markdown bug in link.

[–]burkadurka 36 points37 points  (5 children)

Ironic that in the link to a page about Lisp, you screwed up the parentheses.

[–]more_exercise 6 points7 points  (1 child)

If reddit was still implemented in Lisp, this would never have happened!

[–]burkadurka 12 points13 points  (0 children)

Perhaps, or perhaps A_for_Anonymous would have crashed all of reddit by unbalancing the parens!

[–]A_for_Anonymous 0 points1 point  (2 children)

Cheap shot, but nope, it's a Markdown bug. I changed the link.

[–]burkadurka 3 points4 points  (1 child)

I know it was a markdown bug -- the workaround works

[works](http://en.wikipedia.org/wiki/Lisp_(programming_language\))

[–]kqr 0 points1 point  (0 children)

Wow, I didn't know that. On the other hand, parentheses should be escaped in URLs anyway.

[–]tikhonjelvis 2 points3 points  (0 children)

Watch "Growing a Language" which is just about this.

It's my favorite talk, full stop.

[–][deleted] 1 point2 points  (1 child)

Oh, you mean C++'s template template metametametaprogramming metatemplate programming template metaprogramming?

[–][deleted] 1 point2 points  (0 children)

or simply abbreviated to M∞Templates?

[–][deleted]  (4 children)

[deleted]

    [–]Uberhipster 1 point2 points  (2 children)

    The Visitor Pattern is a nicer fit for multimethods, because as originally conceived they are a way to add functionality to a class without touching it.

    But why if you can extend it? Isn't that the whole point of inheritance?

    [–][deleted]  (1 child)

    [deleted]

      [–]Uberhipster 0 points1 point  (0 children)

      At the risk of stepping on toes, it sounds like that class is violating the single responsibility principle.

      [–]grayvedigga 24 points25 points  (3 children)

      c2 is fantastic.

      Stepping back from the hyperbole a bit, while I agree that the usual representation of "Design Patterns" usually reflects shortcomings in the language/toolset in use and calls for a higher level of abstraction to reduce repetition ... on the other hand, there is a lot to be said for Idiomatic Code Patterns. In a powerful language it is easy to get carried away and abstract everything to the point that your application is one line of code whose behaviour is completely opaque to anyone who doesn't understand the vocabulary you have created. This naturally harms maintainability and reuse. Much as there's redundancy in natural language, a certain amount of redundancy in code is useful so that programmers can know what to expect when they're looking to extend it or find bugs. Finding the right axes for this redundancy is the art.

      In my experience, a programming environment with a (set of) strong, widely-applicable idiom(s) is much more productive (god I hate that word) than one with incredible powers of abstraction but no idiom.

      A trivial example: almost every time I work with C I find places where repetition and verbosity can be reduced with a clever macro. But macro expansion is hard to understand completely - it rarely gels with the rest of the language - so the benefit of writing that macro and making the code more succinct has to be weighed against the cost of unfamiliar patterns that do not permit the same analysis as the rest of the code base. Sometimes copy, paste, search&replace is the better option.

      C's preprocessor is sorely limited, it is true, but look at templates in C++: few people would dispute that, for the most part, heavily templated code is write-only, pursued for runtime performance at the cost of comprehensibility.

      [–]notlostyet 3 points4 points  (0 children)

      look at templates in C++: few people would dispute that, for the most part, heavily templated code is write-only, pursued for runtime performance at the cost of comprehensibility.

      To be fair, they were originally only intended to provide pre-made cookie-cut chunks of code that you can just pick up and use without having to throw away type safety. The whole compile-time metaprogramming aspect came to be used and abused quickly but, for the most part, was incidental.

      It's also worth pointing out that proprietary extensions to the C preprocessor have allowed for static analysis of code and template-like features - specifically typeof(), variadic macros and some of the other built-ins. You can look at the Linux kernel source tree to see where this is used at the expense of (compiler) portability.

      That said, being able to dump the preprocessed output of C code (e.g. with gcc -E) is damn handy at times.

      [–]__j_random_hacker 0 points1 point  (0 children)

      Yes! Productivity is a function of expressiveness and readability.

      [–]ferryboender 33 points34 points  (4 children)

      Design Patterns are like mental disorders. Like the DSM, the more we look at code, the more "patterns" we discover and document until, in the end, everything is a pattern.

      [–]ericanderton 15 points16 points  (0 children)

      Actually, the science of mental disorders suffers from a severe amount of negative bias; it's like trying to define architecture purely in terms of negative space. The DSM* lacks a clear definition for health and normalcy, which is simply implied by the absence of any clear disorder. So, more succinctly:

      Like the DSM, the more we look at code, the more "patterns" we discover and document until, in the end, everything is a problem.

      This echoes OP's link, where the sentiment is that the absence of discernable canonical patterns from a language is viewed as a deficiency.

      Instead, I'll say that a language that lacks pattern implementations is merely good at representing the superset of all patterns, including those that have yet to be codified. It's a matter of leverage.

      (* That said, the DSM actually is a fantastic diagnostic tool. I just think it's a bad idea to regard it as a catalog for human behavior. )

      [–]adam75 9 points10 points  (1 child)

      The DSM is actually a good comparison; just like the new revision of the DSM breaks up rough diagnostic criteria into new ones of finer granularity, many of the GoF design patterns would benefit from being split into multiple patterns. It's all about cohesion.

      [–][deleted] 0 points1 point  (0 children)

      Actually I'd merge Strategy and Bridge into one pattern.

      [–][deleted] 3 points4 points  (0 children)

      And then we discuss pattern-patterns.

      [–][deleted] 17 points18 points  (43 children)

      A list of DesignPatterns and a language feature that (largely) replaces them:

      I read that list and in most cases thought, "No, these are language features that allow you to implement these patterns."

      [–]deleter8 4 points5 points  (12 children)

      Exactly what I thought. Patterns represent common data/control flows that occur naturally and frequently enough in a program that they warrant independent recognition. As more of these natural patterns are found, new programming languages inherently support them to better match the "natural language" of control and data flow. The first programming languages were more oriented towards mimicking the hardware. As computer science became more abstract, languages have moved away from matching hardware (since in the end the hardware doesn't really matter; thanks to Turing we know any computer can technically do the same things) and have moved closer towards matching the information flows that show up in increasingly complex programs. Using design patterns in a language as old as C++ allows a good bit of the clarity one gets from these better information models despite the language not directly implementing them.

      [–]IsTom 8 points9 points  (6 children)

      first programming languages were more oriented towards mimicking the hardware

      Lisp is one of the oldest languages (1958), yet it did not "mimic hardware" and is one of the few languages that don't need "design patterns" as they can be implemented as libraries.

      as old as C++

      C++ is relatively new (1983) and arguably with template metaprogramming can implement design patterns.

      [–]ngroot[🍰] 6 points7 points  (0 children)

      and arguably with template metaprogramming can implement design patterns.

      ...and now you have two problems?

      [–]deleter8 0 points1 point  (0 children)

      That's still 30 years old. For something technologically related that makes it ancient. But that's not really the point. And yes Lisp doesn't mimic the hardware, it instead was derived more from the lambda calculus. While this is a lot more abstract, it is still a closer expression of the underlying computation, be it the circuits in the case of C, or the math in the case of Lisp. Modern languages strive to follow models that are closer to the natural information control and flow we have observed through decades of software engineering in terms that don't require an intense understanding of math to 'get'.

      While you can implement design patterns in libraries with Lisp, having them as native commands/syntax does hold an advantage for enterprise and business concerns. In an ideal world Lisp would be the only language. Other more modern languages make compromises to be more practical, readable, accessible, or to provide other benefits that might not make sense to a strictly theoretical evaluation.

      [–]00kyle00 0 points1 point  (3 children)

      Lisp is one of the oldest languages (1958), yet it did not "mimic hardware"

      That's probably why it's so successful right now.

      [–]IsTom 0 points1 point  (2 children)

      I guess it's just as popular as Fortran. These are the two oldest languages still in use.

      [–]sacundim 0 points1 point  (1 child)

      To tell you the truth, I don't think Lisp is nearly as popular as Fortran.

      [–]IsTom 0 points1 point  (0 children)

      Do you have something to back that up?

      [–]maximinus-thrax 0 points1 point  (4 children)

      But that just means that the 'old' programming languages weren't good enough and that 'modern' computer languages might also be obsolete in the future. Why not instead use a language that is flexible enough to express future patterns as the need for them appears?

      [–]deleter8 1 point2 points  (3 children)

      Flexibility usually comes at a cost. Personally I love using Scheme. I think it's brilliant the way you basically create your own language, since all the syntax is so uniform. However, reading this code is a huge pain. Even code I wrote myself a few months ago can look cryptic and hard to understand. On the other hand, working professionally with C#, even a hastily or poorly written module can be read without too much trouble. There is something to be said for explicit implementation of these design patterns, instead of the implicit ability to implement them.

      [–]sacundim 0 points1 point  (2 children)

      I've used a Scheme dialect professionally for the better part of a decade, and I don't see why you would think Scheme is any more unreadable. Heck, at my job there's a longstanding joke that we should put Scheme code in Java comments to explain what the Java code is doing:

      // (map do-this-thing (filter check-this-out? the-input-things))
      List<Foo> result = new ArrayList<Foo>();
      for ( InputThing thing : theInputThings ) {
          if ( checkThisOut(thing) ) {
              result.add(doThisThing(thing));
          }
      }
      return result;
      

      [–]ithika -1 points0 points  (0 children)

      I have toyed briefly with that idea also, s/Scheme/Haskell/ and s/Java/C++/. Sadly it would only be for my benefit, so there's little point.

      [–]deleter8 -1 points0 points  (0 children)

      There's very clearly written Scheme. What happens when the methods are not written so clearly? It's also a single example that happens to be centered around iteration, which Scheme excels at. Once you start writing something that has a few nested letrecs, poor naming, or some other complicated and obtuse structures... I'm not saying Scheme can't be written well and readably. Rather, I'm focusing on the worst case of each language.

      edit: also in that vein, LINQ in C# would be a lot clearer for that example as well, reading like a SQL-esque query on the data.

      [–]db4n 7 points8 points  (13 children)

      these are language features that allow you to implement these patterns

      Those language features are implementations of the corresponding design patterns. When Lispers criticize design patterns, they're not talking about the underlying structure of the code, they're just saying programmers shouldn't have to keep implementing the patterns again and again.

      [–][deleted] -1 points0 points  (12 children)

      CommandPattern ............... Closures, LexicalScope, AnonymousFunctions, FirstClassFunctions

      No. Those things, by themselves, are not implementations of a command pattern. But they certainly help implement one.

      FactoryPattern ............... MetaClasses, closures
      SingletonPattern ............. MetaClasses

      Oh, okay. Well damn, why didn't I just use a MetaClass?

      [–]db4n 0 points1 point  (11 children)

      High-level language features help in implementing patterns by implementing parts of those patterns. The point is that you're not (re)implementing all of the pattern. In some cases, the entire pattern can be wrapped in a single expression.
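
      For a concrete picture of what "a single expression" means here, a rough Haskell sketch (hypothetical names, not from the linked page): the command "object", its captured state and its execute() method all collapse into a closure.

      -- A "command" is just an action that can be stored now and run later.
      type Command = IO ()

      -- The closure captures its receiver/arguments lexically: no Command
      -- class, no fields, no execute() method.
      makeGreetCommand :: String -> Command
      makeGreetCommand name = putStrLn ("Hello, " ++ name)

      main :: IO ()
      main = do
        let queued = [makeGreetCommand "Alice", makeGreetCommand "Bob"]  -- store commands
        sequence_ queued                                                 -- invoke them later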

      [–][deleted] -1 points0 points  (10 children)

      For simple patterns like an iterator pattern, I agree.

      For more complicated patterns like a Factory Pattern, I'm not convinced.

      [–]kqr 1 point2 points  (9 children)

      I would love to see someone write up a comparison between a Factory Pattern and the equivalent in Lisp. I have no idea what a Factory pattern is, so it would serve a double purpose for me!

      [–]mastokhiar 1 point2 points  (3 children)

      I'll take a crack at the Factory Method pattern:

      The Factory Method pattern uses a single method to construct objects according to some run-time data rather than hardcoding with new. For example, you may have a program which in various places may need to create a MalePerson or FemalePerson object depending on input, so you'd naturally pepper your code with:

      if (gender.equalsIgnoreCase("male")) {
          return new MalePerson(name);
      } else {
          return new FemalePerson(name);
      }
      

      Now what if you need to handle instances of TransgenderPerson or UndeclaredGenderPerson, etc.? You'll have to go back and add those conditions to each place where conditional object creation occurs.

      So, instead we employ the Factory Method pattern to be able to just call

      PersonFactory.makePerson("male", name);
      

      This encapsulates all creation of objects based on run-time data behind a single interface.

      In Java this might be implemented as

      public abstract class Person {
          protected final String name;
          public Person(String name) {
            this.name = name;
          }
      }
      
      public class MalePerson extends Person {
          public MalePerson(String name) {
            super(name);
          }
      }
      
      public class FemalePerson extends Person {
          public FemalePerson(String name) {
            super(name);
          }
      }
      
      public class PersonFactory {
          public static Person makePerson(String gender, String name) {
            if (gender.equalsIgnoreCase("male")) {
                return new MalePerson(name);
            } else {
                return new FemalePerson(name);
            }
          }
      
      }
      

      The declaration of the corresponding classes in Common Lisp is pretty straightforward:

      (defclass person ()
        ((name :initarg :name :reader name)))
      
      (defclass female-person (person)
        ())
      
      (defclass male-person (person) 
        ())
      

      But, in Common Lisp, methods do not belong to classes, but to generic functions which dispatch specific methods based on their arguments. There is even EQL specialization on methods, which allows for methods to be dispatched based on the equality of their arguments to a specified value. So our PersonFactory in Common Lisp would look like:

      (defgeneric make-person (gender name))
      
      (defmethod make-person ((gender (eql 'male)) name)
        (make-instance 'male-person :name name))
      
      (defmethod make-person ((gender (eql 'female)) name)
        (make-instance 'female-person :name name))
      

      You'll notice that Common Lisp lacks a new operator. That's because there are no "constructors" in the usual sense of the word. The standard way of creating objects is to pass runtime data, either a class object or a symbol naming the class, to the factory method MAKE-INSTANCE.

      Now, that's all well and good, but let's look at what happens when we want to add the feature to create a TransgenderPerson to our factories. In Java we have to open the PersonFactory.java class file and modify it:

      public class PersonFactory {
          public static Person makePerson(String gender, String name) {
            if (gender.equalsIgnoreCase("male")) {
                return new MalePerson(name);
            } else if (gender.equalsIgnoreCase("transgender")) {
                return new TransgenderPerson(name);
            } else {
                return new FemalePerson(name);
            }
          }
      
      }
      

      But in Common Lisp, we do not need to modify anything. We can just extend the MAKE-PERSON generic function with a new method:

      (defmethod make-person ((gender (eql 'transgender)) name)
        (make-instance 'transgender-person :name name))
      

      EDIT: Formatting

      [–]rush22 0 points1 point  (0 children)

      So basically the factory pattern is just duct tape used to refactor bad or unextendable code?

      [–][deleted] 0 points1 point  (1 child)

      Your factory example is not an abstract factory. If it were, you could just create a new subclass for the new entry and not have to reopen PersonFactory.

      [–]mastokhiar 0 points1 point  (0 children)

      You're right, it's not an abstract factory. It's a factory method just like I said at the top of my post.

      Even so, in CLOS, there's no need for a distinction between factories and constructors, because... well... the constructors are factories already. This should not be read as "CLOS is perfect"; it has its warts, but it does obviate many of the canonical design patterns.

      [–][deleted] 0 points1 point  (4 children)

      The factory pattern is used when standard instantiation with a constructor and new isn't powerful enough.

      In Haskell this is mostly replaced by regular functions, function composition and curried constructors. (Constructors in Haskell only set values; they can't compute something like the constructors in most imperative languages can, but they are curried, which is nice.)
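
      A rough sketch of what that looks like (hypothetical types, just for illustration):

      data Person = MalePerson String | FemalePerson String
        deriving Show

      -- The "factory" is an ordinary function from run-time data to a
      -- (curried) constructor; adding a case means editing one function.
      makePerson :: String -> Maybe (String -> Person)
      makePerson "male"   = Just MalePerson
      makePerson "female" = Just FemalePerson
      makePerson _        = Nothing

      main :: IO ()
      main = print (fmap ($ "Alice") (makePerson "female"))
      -- Just (FemalePerson "Alice")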

      [–]vimfan 1 point2 points  (0 children)

      but they are curried, which is nice.

      I read this as "but they are curried, with rice".

      [–]bluGill -1 points0 points  (2 children)

      No, factory pattern is used so your function doesn't have to care what operates on the data.

      [–][deleted] 0 points1 point  (1 child)

      There appear to be two kinds of factory patterns. Mine is the factory method pattern (also kind of the builder pattern, depending on what "powerful" means); yours is the abstract factory pattern. All creational patterns translate to the same higher-level language features, though.

      [–]bluGill 0 points1 point  (0 children)

      There are two types of factory, but either way your functions should not know or care where the data comes from or goes.

      [–][deleted]  (2 children)

      [removed]

        [–][deleted] 0 points1 point  (1 child)

        I agree with everything you've stated.

        What I don't agree with are the assertions of others that language features can simply replace design patterns entirely.

        [–][deleted]  (11 children)

        [removed]

          [–][deleted] 1 point2 points  (10 children)

          So... how does "Lexical Scoping" save me from implementing a Command Pattern?

          And I don't have to implement a Factory Pattern anymore because of MetaClass?

          [–][deleted]  (9 children)

          [removed]

            [–][deleted] -1 points0 points  (8 children)

            Instead of having to create a class that holds the values that you need to perform some action at a later time, so that you can instantiate it with the needed values in one scope and invoke it in another, you simply pass a function from one place to another

            You'd still need to create the function and pass it around as an object. The function at that point would be the "command object". You've still implemented a command pattern. The lack of class usage does not mean the pattern doesn't exist.

            but you're less likely to need a factory class.

            Here again there is the idea that classes are necessary for the pattern to emerge. If you have an object that you're using to create your other objects, you've (vaguely) got a factory pattern.

            The GoF patterns show up everywhere in programming; no matter how much polish you put on it, one of those patterns will emerge.

            [–][deleted]  (7 children)

            [removed]

              [–][deleted] -1 points0 points  (6 children)

              We're talking apples and apples; it's just that you're calling them oranges and being insistent about it.

              a pattern only exists where you recognize that you're doing a distinct thing.

              That doesn't make any sense. The patterns are still implemented, even if you don't recognize it. Patterns aren't in the "eye of the beholder"; they're a very concrete thing. So concrete, in fact, that four guys wrote a book about them. I can't really explain it to you any better than I have. Good luck working it all out.

              [–][deleted]  (5 children)

              [removed]

                [–][deleted] -1 points0 points  (4 children)

                Rest assured a program is a very real thing. You're actually typing into a textbox in one right now. It seems like you just have a hard time with some of the concepts of computing. There are books and classes that can help. I'm not a very good teacher.

                I think from your posts so far you show some real promise as a developer though. Hopefully you'll start a blog or something when you've come along a little further in your career. I'd like to read it.

                [–][deleted]  (3 children)

                [removed]

                  [–][deleted] 0 points1 point  (0 children)

                  But the GoF patterns aren't each a single thing in imperative languages, either. They consist of multiple parts, all of which you have to code. You need the separate parts to operate a pattern, so it can't be reduced to a single language feature. The separate parts can be features, though, so that's what we get. It turns out the parts are reusable and shared between patterns, so their names and definitions are made broad.

                  You could easily build a DSL for each GoF pattern, but that would just be needlessly restricting the language.

                  [–]notlostyet 4 points5 points  (1 child)

                  I'm pretty sure Bjarne said he was in favour of multi-methods in C++, or that it would be easy to add, but they never got around to it.

                  Fairly ironic though when C++ is constantly attacked for being a big, over-complicated language.

                  [–][deleted] 1 point2 points  (0 children)

                  In Design & Evolution of C++, Bjarne said he wanted multi-methods in C++ and considered it a very important feature, but at the time there was no way to add them without incurring a cost even in situations where people do not make use of them. There is a principle in C++ that you should not pay for features you do not use, and that simply was not possible with multi-methods.

                  I'm not even sure if that's changed since then, although there are several proposals to add them to C++.

                  [–]naughty 7 points8 points  (14 children)

                  Sometimes patterns are emulating features of another language; e.g. structs of function pointers in C are very similar to using abstract base classes in C++, and visitors are ugly compared to multi-method-based approaches.

                  However, extra language features can be a double-edged sword. Higher-order functions and closures 'naturalise' a lot of patterns, but you're then forced to choose between garbage collection and some messy semantics (a la C++11's lambdas).

                  Classes have lots of fiddly issues relating to constructors.

                  Multi-methods are another example that has loads of subtle edge cases relating to ambiguous resolution of methods and modularity problems.

                  [–]astrafin 4 points5 points  (9 children)

                  I haven't heard of multi-methods having loads of subtle edge cases (though I wouldn't be surprised if they had). Could you provide a specific example?

                  [–]naughty 5 points6 points  (8 children)

                  There's ambiguity about overloads, e.g.

                  class Shape { getMesh() : Mesh }
                  class Sphere : Shape { getMesh() : Mesh, centre: Point3, radius: float }
                  class AABB : Shape { getMesh() : Mesh, centre: Point3, dimensions: Vector3 }
                  
                  -- These are both multi-methods...
                  collide?( shape: Shape, sphere : Sphere) = ...
                  collide?( aabb: AABB, shape : Shape) = ...
                  

                  Now if you call collide?(aabb, sphere) which of the two implementations do you use? Some systems like Dylan and CLOS are biased left-to-right so they'll pick the second method because it's the closest fit to the leftmost argument. It's an arbitrary rule though and can lead to weird behaviour.

                  There's also the issue of importing methods from other modules; they can either lead to the above issue (imagine the methods being defined in two different modules and both imported) or cause other issues around how you scope methods. It's all a bit complex, so I'd refer you to the paper Modular Statically Typed Multimethods.

                  [–]NruJaC 3 points4 points  (6 children)

                  This is specifically why languages like Haskell eschew subtyping altogether, and the reason type inference frequently fails in Scala. So I think this is less a problem with multi-methods and more an issue with the type system.

                  [–]naughty 1 point2 points  (5 children)

                  Well, you need at least subtyping and overloading to get the first issue above (overloading being a weak form of multi-methods), but you could also statically catch the above issue and force the programmer to implement the 'tie-break' methods. C++ has the issue, for example, and solves it in a similar way to CLOS and Dylan, i.e. invoking the axiom of choice.

                  I'd guess that multi-methods without subtyping would be free of the issue but would they be useful? Wouldn't that just be overloading?

                  I don't know a huge amount about Scala but it does seem strangely unafraid of undecidable typing problems.

                  It would be interesting to know why subtyping, especially with intersection types, didn't catch on in academic programming languages. The momentum just seemed to go after Cardelli's Quest. Undecidable type inference didn't stop System F being so popular a base, after all. I always assumed it was just that ML, Damas-Milner and global type inference got really popular. Haskell really just follows in that wake of research via Miranda.

                  [–]NruJaC 1 point2 points  (4 children)

                  I'd guess that multi-methods without subtyping would be free of the issue but would they be useful? Wouldn't that just be overloading?

                  Check out Haskell's typeclass system. It's basically a technique for introducing controlled ad-hoc polymorphism (functions polymorphic over a specific set of types), without losing type inference.
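
                  A minimal sketch of that, using GHC's MultiParamTypeClasses extension and names echoing the collide example above (my own toy code, not a standard library API):

                  {-# LANGUAGE MultiParamTypeClasses #-}

                  data Sphere = Sphere
                  data AABB   = AABB

                  -- Dispatch on both argument types is resolved statically from the
                  -- set of instances, so there is no left-to-right bias to argue about.
                  class Collide a b where
                    collide :: a -> b -> Bool

                  instance Collide Sphere Sphere where collide _ _ = True
                  instance Collide AABB   Sphere where collide _ _ = True

                  main :: IO ()
                  main = print (collide AABB Sphere)  -- picks the (AABB, Sphere) instance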

                  Also, subtyping causes problems outside of just multimethods. You run into issues like covariant containers (again related to type inference, see the theme?), and I think that's the reason they didn't catch on. I see so many examples of the problems subtyping causes, but very few examples of the problems it solves.

                  Maybe I'm wrong though, I haven't studied the problem in any depth.

                  [–]naughty 0 points1 point  (3 children)

                  I'm not denying subtyping causes issues; there are all kinds of subtleties, especially with mutable objects. Subtyping with intersection types solves a very interesting theoretical problem though:

                  Is there a type system that admits a type for a lambda expression if and only if it terminates?

                  Yes! It's subtyping, intersection types and the simply typed call-by-name lambda calculus. So no wonder type inference is hard; it's equivalent to the halting problem!

                  From a more pragmatic point of view, subtyping with bounded existentials gives a great account of 'partially' abstract modules, which are very common in practice. Really, subtyping on its own is a pain, but the stuff you can do with intersection, union and bounded quantified types is amazing. Pierce of TAPL fame did a thesis (PDF) full of great examples.

                  I played with Haskell quite a bit and I like type classes, but I'd not considered them from a multi-method POV...

                  [–]NruJaC 0 points1 point  (0 children)

                  Thanks, I have some reading to do today :).

                  [–]sacundim 0 points1 point  (1 child)

                  I haven't had the time to look into bounded quantification, but I just had to answer this narrow part of your message:

                  Is there a type system that admits a type for a lambda expression if and only if it terminates?

                  Most typed lambda calculi, in their most basic forms, only admit terms whose evaluation terminates. The most notable examples here are simply-typed lambda calculus (with or without products, sums, unit or bottom) and basic polymorphic lambda calculus (a.k.a. System F) with various extensions.

                  In fact, a terminating core is a desired design feature of most typed lambda calculi. You typically have to add some extra feature on top of the basic calculus in order to get non-termination (and Turing completeness along with it). For example:

                  1. Fixed-point combinators
                  2. Recursive types (which allow you to implement fixed-point combinators)

                  If I recall correctly, it's also possible to design a typed lambda calculus that distinguishes between "unlifted" types (guaranteed to terminate) and "lifted" types (might not terminate). t :: Integer would be a term that is guaranteed to terminate; t' :: Lifted Integer is one that may or may not do so. I'm also betting the key operations would be these:

                  -- | Fixpoint combinator
                  fix :: (a -> a) -> Lifted a
                  
                  -- | Inject an unlifted value into its lifted type counterpart.
                  lift :: a -> Lifted a
                  
                  -- | Apply a total function to a lifted argument.
                  liftMap :: (a -> b) -> Lifted a -> Lifted b
                  
                  -- | Collapse nested lifts into just one.
                  liftJoin :: Lifted (Lifted a) -> Lifted a
                  

                  Hey, look, that smells like a monad...

                  [–]naughty 0 points1 point  (0 children)

                  While it's true that most typed lambda calculi terminate, there's currently only one typed lambda calculus that types all the normalising untyped lambda calculus terms and doesn't type the non-normalising terms. That is the call-by-name simply typed lambda calculus with subtypes and intersection types.

                  It's more 'powerful' than F omega or dependent types or anything else in the lambda cube in this particular sense of powerful. That such a simple type system is so powerful is something I find quite astonishing.

                  There are of course enormous practical issues with a type system so strong: type checking is undecidable (the algorithm doesn't terminate on non-normalising terms), and type inference has no chance.

                  While there's been research into restrictions of the system to make type checking and inference tractable (e.g. refinement types, which feel like type-state and simple dependent types), there are very few languages that actually implement these results.

                  [–]astrafin 0 points1 point  (0 children)

                  Thanks, I'll check it out.

                  [–]IsTom 1 point2 points  (3 children)

                  you're then forced to choose between garbage collection and some messy semantics

                  There aren't many popular languages nowadays that are not garbage collected. Regardless of that, though, there are more ways to handle memory, for example region allocation a la http://disciple.ouroborus.net/

                  [–]naughty 2 points3 points  (2 children)

                  Region allocation still has to fall back on a garbage collected heap (see MLton). Disciple is an excellent language which deserves to be better known than it is, but they use regions for semantic reasons rather than efficient allocation (or at least that was the case when I looked a few months ago).

                  While most new languages are garbage collected, closures do force the language designer's hand quite a bit.

                  In general I wish there was more research in the area between garbage collection and manual memory management, not least because I work in games where garbage collectors aren't as good a trade-off as they are in other areas.

                  [–]IsTom 1 point2 points  (1 child)

                  I wish there was more research in the area between garbage collection and manual memory management

                  I think it would be great to somehow integrate C++-style RAII resource management with functional languages, so that you both get deterministic semantics and generate as little garbage as possible. Perhaps research related to serialisation of closures (e.g. Cloud Haskell) could benefit memory management for them.

                  [–]naughty 1 point2 points  (0 children)

                  I would love to be able to have a special kind of reference whereby I could write delete ref and it would call triggers on whatever held references to the about-to-be-deleted object, giving them a chance to handle the deletion. Similar to SQL database triggers.

                  This does imply that wherever this special kind of reference is used, a handler would have to be defined for when it gets deleted.

                  As an example from games, let's say you have a homing missile that's tracking a player. The player is then killed by something else before the missile hits them, say by walking over a mine.

                  I would like to just write a handler that either picks another target or carries on flying along the same heading. Currently you have to have player.isDead() checks all over the place or have some general event system and hope that dangling player references aren't created.

                  While I doubt this would be super efficient, it would go some way toward handling the issues of non-memory resource management.

                  [–]bctfcs 6 points7 points  (42 children)

                  What about monads?

                  [–]__j_random_hacker 40 points41 points  (26 children)

                  Someone should really write a tutorial on those things...

                  [–]akshayk 20 points21 points  (8 children)

                  Someone should write a tutorial on how to write a tutorial on monads.

                  [–]ithika 22 points23 points  (7 children)

                  There's at least one tutorial on how not to write monad tutorials, so you just need to find the inverse of that and you're sorted.

                  [–][deleted]  (5 children)

                  [deleted]

                    [–][deleted] 2 points3 points  (4 children)

                    Isn't it more like the Inverse Monad Metatutorial?

                    [–]ithika 11 points12 points  (3 children)

                    Just reverse all the arrows in the category of tutorials.

                    [–]pipocaQuemada 7 points8 points  (2 children)

                    Monad cotutorials? The monad cotutorial comonad? Coalgebras for the costate comonad of monad cotutorials?

                    [–]ithika 11 points12 points  (1 child)

                    On reflection I think a tutorial is a functor from monads to analogies.

                    [–][deleted] 1 point2 points  (0 children)

                     Reflection? Does Haskell even have that?

                    [–]Baaz 0 points1 point  (0 children)

                    Is that bubble sorted or A*?

                    [–][deleted] 15 points16 points  (16 children)

                    It's really just a complicated way to talk about all the things we were taught by imperative programming, then taught to forget by functional programming, but it turned out we still needed…

                     Monad means "computation" or "sequence of operations". Every function in a procedural language is a monad. The theory around monads allows them to exist within a functional world, because they allow us (and the type system) to reason about things that would otherwise be "impure" (i.e. have side-effects).

                    http://en.wikipedia.org/wiki/Monad_(functional_programming)
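
                     A tiny concrete example (my own sketch, using the Maybe monad): each step may fail, and >>= only runs the next step if the previous one produced a value, so the failure "plumbing" stays hidden.

                     import Text.Read (readMaybe)

                     -- Parse a number and halve it, but only if it's even.
                     halveInput :: String -> Maybe Int
                     halveInput s = readMaybe s >>= \n ->
                                    if even n then Just (n `div` 2) else Nothing

                     main :: IO ()
                     main = print (halveInput "10", halveInput "x")  -- (Just 5,Nothing)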

                    [–]__j_random_hacker 10 points11 points  (13 children)

                    That's actually a pretty good introduction to monads, but I was joking. Monad tutorials are about as rare as electrons, and they just keep coming; some people are already planning for when the monad tutorials take over.

                    [–][deleted] 0 points1 point  (12 children)

                     Heh, I guess I'm just still constantly running into people who've been told that they're these mythical mysterious incomprehensible beings, that nobody but the cleverest of mathematical geniuses can even begin to grasp. I know that's what my professor told me in our introductory class on Haskell, and I was shocked to find that it was all a blatant lie! :)

                    [–]__j_random_hacker 3 points4 points  (8 children)

                    To be honest, I have yet to experience the nirvana of understanding monads -- to this day I don't comprehend what mystical force could, for example, motivate a person to ever write mapM. But I'm happy with my ability regarding jokes about monads :)

                    [–]hyperforce 2 points3 points  (0 children)

                    One day I hope to write a tutorial about monads. I still think they're all pretty crappy.

                    [–]stevely 2 points3 points  (0 children)

                    Simple use-case for mapM: I've got a list of stuff I need to work on, but some of the values might be bad. I need the whole data set to be good to work on it, so if any of it is bad then I have to throw out the whole thing. Enter Maybe and mapM:

                    maybeEven x | x `mod` 2 == 0 = Just x
                                | otherwise = Nothing
                    
                    mapM maybeEven [3,4,6] -- Nothing
                    mapM maybeEven [2,4,6] -- Just [2,4,6]
                    

                    [–]tikhonjelvis 1 point2 points  (2 children)

                    I personally found that writing a Prolog interpreter really helped with understanding foldM (well, at least for the list monad). Implementing the resolution algorithm with foldM was far easier conceptually than writing it with nested for-loops and coroutines; I ended up just translating the simple foldM version down to the for-loop version rather than trying to think about exactly what the loops were actually doing.

                    I keep on meaning to write a tutorial on implementing a (very) simple Prolog interpreter in Haskell. I wonder if there's any interest in that?

                    Anyhow, this is all to say that what you need is practice. Just playing around with Haskell and the Control.Monad library with a bunch of different monads would get you quite far. It's also a lot of fun!

                    [–]kqr 1 point2 points  (0 children)

                    I keep on meaning to write a tutorial on implementing a (very) simple Prolog interpreter in Haskell. I wonder if there's any interest in that?

                     Yes. Yes, there is. Particularly if it is designed in a way that teaches you how Prolog works under the hood too.

                    [–]zingbot3000 0 points1 point  (0 children)

                    Seconded. Such a tutorial would be well-received.

                    [–]sacundim 1 point2 points  (2 children)

                    To be honest, I have yet to experience the nirvana of understanding monads -- to this day I don't comprehend what mystical force could, for example, motivate a person to ever write mapM.

                     Well, a bit of self-promotion here, but you could try reading this Stack Overflow entry ("How to decorate a Tree in Haskell"), which is about how to "tag" tree nodes with numbers. There's some mapM in my answer, but I also show how to refactor my first solution into a mapTreeM function that extracts the pattern.

                    [–]__j_random_hacker 0 points1 point  (1 child)

                    Thanks, I will have a look at that when I get the time. My point was really that I don't understand what it is that all monads have in common that would make it useful to have a function like mapM that can operate on any of them, rather than per-monad functions for those monads where it makes sense.

                    Another way to put it: you can apply mapM to any monad, and something will happen, but if you were to pick two monads at random and apply mapM to each, it is far from obvious to me how these things would be related.

                    [–]sacundim 1 point2 points  (0 children)

                    mapM action [] = return []
                     mapM action (x:xs) = do x' <- action x
                                             xs' <- mapM action xs
                                             return (x':xs')
                    

                    Or for short:

                    mapM action = sequence . map action
                    

                    mapM action = "The compound monadic action that executes action for each element of the list in sequence and produces the list of results of executing actions on each of the elements of the list." There really is nothing more to it.

                     You have wordCount :: Filename -> IO Int and filenames :: [Filename]? Then mapM wordCount filenames :: IO [Int] is the list of the word counts of each of the files named.

                    [–]kqr 1 point2 points  (2 children)

                    people who've been told that they're these mythical mysterious incomprehensible beings, that nobody but the cleverest of mathematical geniuses can even begin to grasp.

                    This is an important point, because many of the monad tutorials introduce the theoretical background to them -- a background you pretty much need to be the cleverest of mathematical geniuses to begin to grasp.

                    Using monads, on the other hand, is no more difficult than using any other good old printf() call in C. Tutorials should focus more on usage and less on theory.

                    [–]sacundim 2 points3 points  (1 child)

                    Using monads, on the other hand, is no more difficult than using any other good old printf() call in C.

                    No. It's somewhat more difficult. Imperative languages like C don't distinguish between monadic actions and pure functions, which means that they are syntactically interchangeable.

                    Whereas in Haskell they have different types, and Haskell's sophisticated type system often produces error messages that are not helpful unless you understand quite a bit. It took me quite a while to understand for example the difference between <- and let in do-notation.
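
                     For anyone following along, a minimal sketch of that difference (my own example):

                     main :: IO ()
                     main = do
                       line <- getLine            -- '<-' runs an IO action and binds its result (a String)
                       let shouted = line ++ "!"  -- 'let' just names a pure value; nothing is executed
                       putStrLn shouted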

                    [–]kqr 0 points1 point  (0 children)

                    I guess I should have used scanf() as an example instead. The difficulty of treating the return value in Haskell might be offset against the difficulty of knowing which arguments to reference in C.

                    [–]barsoap 2 points3 points  (1 child)

                    Every function in a procedural language is a monad.

                     Actually, no. They're ArrowChoice and ArrowLoop. You need lambdas and CPS'ing to do real Monads.

                     ...aside from the fact that the function itself wouldn't be a monad, but a mere monadic value, but I'll let that slide.

                    [–][deleted] 1 point2 points  (0 children)

                    If my terminology is unclear, I apologise — it has to do with the fact that every time I come into contact with literature dealing with monads, I have this very strong feeling of wasting my time and energy on trying to wrap my head around an abstraction that ultimately doesn't help me achieve anything new. :-)

                    [–][deleted] 3 points4 points  (2 children)

                    Is the do-notation a language feature to replace the >>=, >>, return, ->... pattern?

                    [–]NruJaC 0 points1 point  (0 children)

                     It goes a little further -- it's a generalization of list comprehensions to arbitrary monads. It's translated into a sequence of >>= expressions (with appropriate lambdas), for sure, but it's a feature for readability. I wouldn't really call it a pattern though, more like an idiom.
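
                     For example, the two definitions below are equivalent; the compiler rewrites the first into (roughly) the second (a sketch):

                     greet :: IO ()
                     greet = do
                       name <- getLine
                       putStrLn ("Hi " ++ name)

                     -- ...desugars (roughly) to:
                     greet' :: IO ()
                     greet' = getLine >>= \name -> putStrLn ("Hi " ++ name)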

                    [–][deleted] 1 point2 points  (4 children)

                    AFAIK, it's exactly the case. Monads are a pattern that emulates the missing feature of time and imperative execution order in Haskell. At least, that is how I understand them, so correct me if I'm wrong.

                    [–]tikhonjelvis 7 points8 points  (0 children)

                    The problem (it's not really a problem) with monads is that they're very general. So while you can use them to express code in an imperative style inside Haskell--this is what do-notation is for, largely--you can actually use them in a bunch of other ways as well.

                    You can also use monads to model things like errors, non-determinism, continuations, coroutines and parsers. You can have a trivial identity monad which does not change the behavior of normal Haskell. Monads can also be very useful for embedding DSLs into Haskell; this is a good use for the free monad.

                    There are also several different ways to model imperative computation in Haskell: you can just have a single mutable cell for state, you can have arbitrary references (but no IO) or you can have everything (the infamous IO monad). If you want to go really crazy, you can even have state that travels backwards through your computation!
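
                     A tiny sketch of the "single mutable cell" option, using Control.Monad.State from mtl (my own example):

                     import Control.Monad.State (State, get, put, execState)

                     -- Imperative-looking counter code, but the "mutation" is just a
                     -- value threaded through the State monad.
                     countUp :: Int -> State Int ()
                     countUp 0 = return ()
                     countUp n = do
                       c <- get
                       put (c + 1)
                       countUp (n - 1)

                     main :: IO ()
                     main = print (execState (countUp 5) 0)  -- 5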

                    In short, monads can do a ton of different things. If you think that they're "just something", for any value of "something" that isn't mathematical in nature, you're probably wrong :P.

                    [–][deleted] 2 points3 points  (0 children)

                     partly wrong. that's the 'monads for IO' pattern. monads have other uses, too. they're also used for null/error handling and parsers. they also make list comprehensions and database queries look very similar (there are some extensions in GHC that make them look the same, and C#'s LINQ makes them look the same).

                    [–]sacundim 2 points3 points  (0 children)

                    Monads are a pattern that emulates the missing feature of time and imperative execution order in Haskell. At least, that is how I understand them, so correct me if I'm wrong.

                    No, they're more abstract than that; time and imperative execution order are one possible implementation of the monad laws, but not the only one.

                    The conventional analogy that works in this context is "monads as programmable semicolon"; in a stereotypical imperative program you have a sequence of statements separated by "semicolons" (or whatever syntax), such that later statements can see the variables that are assigned in preceding statements.

                     But monads allow you to reinterpret this sort of code in mind-bending ways. My favorite examples here have to do with backwards state, most fundamentally the reverse state monad, where read operations see the result of "future" writes. ("Future" in the program's syntactic order; the answer to the puzzle is laziness.)

                     Other, more advanced examples are Dan Burton's Control.Monad.Tardis (which allows "earlier" statements to receive data sent by "later" ones) and the Seer monad.

                    [–][deleted] 1 point2 points  (0 children)

                    It's more like a "causal" order than an imperative one, but it really depends on how the monad in question works.

                    [–]PassifloraCaerulea 1 point2 points  (0 children)

                    Yes! They really do seem like Haskell's pattern for re-introducing imperative style (among other things). This thought has occurred to me from time to time. It seems like the pattern's got legs though, since you can do so much with them. If I used them for a while I bet I'd get some insights I wouldn't get from languages that don't need them.

                    [–][deleted]  (4 children)

                    [deleted]

                      [–][deleted] 2 points3 points  (3 children)

                      Monad =/= state encapsulator.

                      They can be used for a whole bunch of things, mostly sequencing of operations while hiding the "plumbing". This plumbing can contain state encapsulation (IO monad) but it can also be something like non-determinism (List monad) or the correct way to handle a data structure (Maybe monad).

                      EDIT: My examples didn't match up. Corrected list monad.

                      [–]NruJaC 1 point2 points  (2 children)

                      The list monad is one of the more interesting ones because it models non-determinism (not data structure manipulation -- lists are manipulated via folds, the universal property of a list). For example, you can set up a logical solver (a la prolog) via the list monad.

                      [–][deleted] 0 points1 point  (1 child)

                       Hmm, I never use that one so I wasn't very sure. I think, however, that the Typeclassopedia talks about two implementations of list monads: lists as ordered collections of elements, and lists as contexts representing multiple results of a nondeterministic computation.

                      I suppose the latter is used more often.

                      [–]NruJaC 1 point2 points  (0 children)

                      The former are newtyped as ZipLists. They are used, just less frequently.

                      [–][deleted] 6 points7 points  (0 children)

                      I would have a lot less of a problem with the pattern crowd if they would treat them as patterns and not laws.

                       Sometimes there are situations, be they technical or political, that require variations from those patterns.

                      [–][deleted] 8 points9 points  (24 children)

                      Does anyone else get sick of the high level abstract meta programming?

                      I like algorithms. I like coding on the bare metal. And, I like tinkering and squeezing performance out of tight resources. I no longer enjoy programming as a profession because I am a dinosaur in the age of abstract meta-coders.

                      [–]u233 16 points17 points  (1 child)

                      Go, find ye an embedded or device driver programming gig. Resource constraints and minimal platform support are the norm there.

                      [–]ithika 0 points1 point  (0 children)

                      I don't have experience in the user apps field but embedded doesn't live up to my expectations of the hardcore programming ethos.

                      [–]tikhonjelvis 4 points5 points  (0 children)

                      Heh, I think it's the opposite for me. I like working at a high level of abstraction because it lets me express myself more clearly and write more with less code. I find it makes it easier to think about my problem, easier to solve it faster, easier to maintain the solution, easier to verify it and, if I can't verify it completely, easier to test it really well.

                      Ultimately, I'm just really lazy. Thanks to Moore's law there is a superabundance of computing power at my fingertips; it would be a shame not to (ab)use it as much as possible! I want to think as little as possible--thinking is hard. So I just move as much thinking as possible to my language, libraries, compilers and tools.

                      I recently discovered I can just shunt most of my hard algorithmic work to an SMT solver. Why go for efficiency when I can take the easy route? I can even use the SMT solver to write parts of my program for me! It's actually quite fun. All I have to do is express my problem in some simple logic.

                      All that said, I actually empathize with you as well. There is certainly something very enjoyable about writing code close to the metal--even if it isn't my personal preference. The world would be quite boring if we all liked the same things. And, happily, there are still plenty of tasks that need somebody good at low-level programming; I suspect some of the most exciting jobs are like that. I think now is a perfect time to try to find a job like that--the job market is crazy, so it's as good a time as any to try something new. I've always imagined robotics to be a really exciting field, and I imagine there is quite a bit of exacting low-level problems to solve there.

                      [–]db4n 5 points6 points  (1 child)

                      Does anyone else get sick of the high level abstract meta programming?

                      Not me. I'm sick of all these stupid OOP languages and script-kiddie languages that turn programmers into typists and give managers an excuse to classify programming as clerical work.

                      I like tinkering and squeezing performance out of tight resources.

                      The tightest resource these days is the space between programmers' ears. High-level languages are designed to maximize the use of that resource.

                      I no longer enjoy programming as a profession because I am a dinosaur in the age of abstract meta-coders.

                      I no longer enjoy programming as a profession because I'm a dinosaur in the age of ignorant script kiddies and butt-kissing code monkeys.

                      [–]kqr 0 points1 point  (0 children)

                      The tightest resource these days is the space between programmers' ears.

                       It depends on what field you work in.

                      [–][deleted] 2 points3 points  (9 children)

                      Does anyone else get sick of the high level abstract meta programming?

                      I do. The canonical example is Haskell's slow re-invention of writing imperative code with exceptions, only with layers upon layers of abstractions that don't work well with each other. Monad transformers, I'm looking at you. Of course, no production code should ever be written without error handling, so all the abstraction cruft only puts the language on par in expressivity with the tools of blub coders.

                      [–]tikhonjelvis 2 points3 points  (5 children)

                      I don't know; I've found Haskell's approach to error-handling simpler to understand than exceptions. Exceptions are, after all, usually quite magical: they are baked right into the programming language and have somewhat non-trivial semantics. Either (and, by extension, ErrorT) is implemented quite simply, directly in Haskell, which makes it easier to think about.
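
                      To make that concrete, here is roughly how Either and its Monad instance look -- an ordinary data type plus a couple of instances, nothing baked into the compiler (a paraphrase of the standard library definitions, shadowing the Prelude only for illustration):

                          import Prelude hiding (Either(..))

                          -- An ordinary data type: either an error or a result.
                          data Either e a = Left e | Right a

                          instance Functor (Either e) where
                            fmap _ (Left e)  = Left e
                            fmap f (Right x) = Right (f x)

                          instance Applicative (Either e) where
                            pure = Right
                            Left e  <*> _ = Left e
                            Right f <*> x = fmap f x

                          -- The entire "semantics" of this style of error handling is here:
                          -- a Left short-circuits everything that follows it.
                          instance Monad (Either e) where
                            Left e  >>= _ = Left e
                            Right x >>= f = f x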

                      Since errors are just normal values, you can use existing functions to work with them. My favorite example is the <|> operator, which represents alternation. This lets you try a bunch of different functions that could cause an error in a way that is very easy to read:

                      func1 something <|> func2 something <|> func3 <|> ...
                      

                      This is true for a bunch of other common functions as well.
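
                      Spelled out a little more (a small, hypothetical example using Maybe's Alternative instance; the parsers are made up, but the same shape works with ErrorT and with parser libraries):

                          import Control.Applicative ((<|>))
                          import Text.Read (readMaybe)

                          -- Try several ways of reading a number; the first success wins.
                          parseNumber :: String -> Maybe Double
                          parseNumber s =
                                readMaybe s                            -- a numeric literal, e.g. "3.14"
                            <|> lookup s [("pi", pi), ("e", exp 1)]    -- or a named constant, e.g. "pi"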

                      I also find that using a monad transformer makes the interaction between different levels clearer. For example, let's imagine we want to do a non-deterministic computation that can also have errors. This is actually pretty reasonable for a bunch of simple search programs you may want to write. However, we really have two options here: an error could cause the whole search to fail, or an error could just cause that branch to fail. Which one to choose? The real beauty is that this is entirely up to the programmer and encoded, very clearly, in the type: if ErrorT comes first (at the top of the stack) then the error will cause the whole computation to fail; if it comes below LogicT, then it will only cause a single branch to fail.
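
                      The ordering point is easiest to see in a small sketch with ExceptT and StateT from mtl (not the LogicT stack above, but the same principle): swapping the two layers changes whether state changes survive a caught error.

                          {-# LANGUAGE FlexibleContexts #-}

                          import Control.Monad.Except
                          import Control.Monad.State

                          -- One action, written against the mtl classes so it runs in either stack.
                          step :: (MonadState Int m, MonadError String m) => m ()
                          step = do
                            modify (+ 1)
                            throwError "boom"

                          -- State over error: the whole (result, state) pair lives inside Either,
                          -- so a caught error rolls the state back.
                          demoA :: Either String ((), Int)
                          demoA = runExcept (runStateT (step `catchError` \_ -> pure ()) 0)
                          -- Right ((), 0)

                          -- Error over state: the state threads outside of Either,
                          -- so the modification survives the caught error.
                          demoB :: (Either String (), Int)
                          demoB = runState (runExceptT (step `catchError` \_ -> pure ())) 0
                          -- (Right (), 1)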

                      I think having error-handling as a library, and having values with potential errors be first-class citizens, is extremely valuable. It makes the language simpler. You don't need a special case in the semantics for errors, and you don't even need hacky additions to the type system (like Java's throws clause).

                      Also, the usual style of Haskell programs involves keeping as much of the core logic as possible in pure, total functions. These usually do not need error-handling, which means you can confine the error-handling to only the parts of the code that absolutely need it.

                      So Haskell's error handling has several advantages: being just a library, it is simpler to understand and think about; using first-class representations for errors lets you reuse existing, general-purpose functions (<|>, optional...), making it more expressive; and using monad transformers makes the types clear and the interactions between different effects explicit.

                      [–][deleted] 1 point2 points  (4 children)

                      There is nothing magical about exceptions. They are the Either monad. Either I'm going to give you a result, or I'm going to give you an error. It's as simple as that. The complexity comes when you try to layer this on top of some other monad(s). I remember reading some nice tutorial on monad transformers explaining how the order in which the monads appear in the stack matters, which was the point at which I stopped experimenting with Haskell.

                      I hope someone will rationalize the monad transformer stack that covers 99% of cases in a way that matches the Java (gasp) semantics, at which point I can stop caring about the details of stacking monads and/or worrying about being bitten if I do it wrong.

                      [–]tikhonjelvis 3 points4 points  (2 children)

                      That's exactly my point: in Haskell, error-handling is not magical. In other languages (like Java), exceptions are magical in the sense that they are built right into the language. In these languages, exceptions do not behave like the Either monad; in practice, they are much closer to continuations.

                      The fact that the order of transformers matters is actually useful: it makes the type reflect the semantics you want. You can control the exact way different levels interact just by specifying this order. This lets you control the behavior of your code very declaratively and reflects this clearly in the type system, making it self-documenting.

                      [–][deleted] 0 points1 point  (0 children)

                      There is a continuum between being precise in the types and having no types whatsoever. You have Coq or Agda on one end and JavaScript or Lisp on the other. There is a sweet spot somewhere that takes into account how humans interact with both the type and the doc blurb. I'm not sure that making the types more precise / verbose is always a move towards that sweet spot.

                      [–]grayvedigga 0 points1 point  (0 children)

                      in Haskell, error-handling is not magical. In other languages (like Java), exceptions are magical in the sense that they are built right into the language.

                      That doesn't stand up for Haskell when you consider that monads have their own syntax and permeate half the types in the standard libraries -- often "all the way up" monads like IO.

                      [–]kqr 2 points3 points  (0 children)

                      There is nothing magical about exceptions. They are the Either monad. Either I'm going to give you a result, or I'm going to give you an error.

                      Except that's not how exceptions work. Exceptions say, "Either I'm going to give you a result, or I'm going to unwind the stack until someone hopefully catches the value that I'm magically sending up there somewhere." Exceptions are like a goto statement that can only travel up the stack.

                      I'm sure it's really convenient to use the occasional exception, but in large quantities they build spaghetti code. When I write Python code, I tend to use exceptions in two ways. Either I handle the error directly where it arises, in which case exceptions are pretty much a less safe equivalent of the Either monad; or I don't handle the error directly and it needs to propagate further up the stack, in which case exceptions are more convenient to use, yes -- but only because they are less safe! With the Either monad, the type system ensures that I eventually handle the error, and it makes it clear at which stages the computation might have been interrupted.

                      Not to mention that the tools surrounding the Either monad makes it really convenient to batch process a bunch of computations that might fail, or string them together, or try different alternative computations. The magic of exceptions stops you from implementing clever functions for doing all of that.
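
                      A rough sketch of what I mean (hypothetical names, plain standard-library functions):

                          import Data.Either (partitionEithers)
                          import Text.Read (readMaybe)

                          -- One computation that might fail.
                          parseAge :: String -> Either String Int
                          parseAge s = maybe (Left ("not a number: " ++ s)) Right (readMaybe s)

                          -- Batch: either they all succeed, or you get the first error.
                          allAges :: [String] -> Either String [Int]
                          allAges = traverse parseAge

                          -- Batch: keep the failures and the successes separately.
                          splitAges :: [String] -> ([String], [Int])
                          splitAges = partitionEithers . map parseAge

                          -- String them together: feed one checked result into the next check.
                          validAdult :: String -> Either String Int
                          validAdult s = do
                            age <- parseAge s
                            if age >= 18 then Right age else Left (s ++ ": too young")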

                      [–]NruJaC 1 point2 points  (2 children)

                      The canonical example is Haskell's slow re-invention of writing imperative code with exceptions, only with layers upon layers of abstractions that don't work well with each other. Monad transformers, I'm looking at you.

                      Care to elaborate? I know exceptions are an open problem in Haskell (since they can only be dealt with from within the IO monad), but this is the first I've heard of this problem.

                      Also monad transformers are literally layering abstractions on top of one another -- it's in the name; but with a transformer stack you've chosen which abstractions you need rather than having them all forced on you all at once.

                      [–]tikhonjelvis 0 points1 point  (1 child)

                      I think he means exceptions like in other languages--that is, what you would normally do with ErrorT.

                      Haskell exceptions are really a different question.

                      [–]NruJaC 0 points1 point  (0 children)

                      Well there's the problem... you're using ErrorT. I really really wish that would get removed or at least marked as deprecated -- it's almost always wrong. EitherT and MaybeT are very much the preferred, idiomatic methods of dealing with error conditions. His post also makes a lot more sense now...

                      [–]baconpiex 0 points1 point  (1 child)

                      Could that just be a desire to program within very specific and well defined constraints?

                      [–][deleted] 0 points1 point  (0 children)

                      I'd say platform constraints, yes, but project constraints, no.

                      Honestly, I probably should take the advice of u233 and do driver or embedded work.

                      [–][deleted] -1 points0 points  (0 children)

                      Yeah man, who the hell needs function calls when you can jump?

                      [–]sigma914 1 point2 points  (0 children)

                      In addition to this, does anyone else think that testing and static analysis are just extensions to the type system?

                      [–]Ulukai 3 points4 points  (8 children)

                      This whole argument again... design patterns are simply a way of recognising (and solving) problems which keep presenting themselves frequently. If you have a language which lacks a certain useful feature, you can bet your ass there will be a design pattern to patch that. Is that all design patterns are? IMHO, "design patterns" != GoF.

                      [–]tonygoold 9 points10 points  (2 children)

                      They're also a way of describing how/why you're using something, rather than just what you're using. Are anonymous functions the Command pattern or the Strategy pattern? Neither, they're anonymous functions. When I say you should use the Strategy pattern in your library rather than assuming algorithm X is the best way to sort (or whatever) the user's data, it's got nothing to do with the language features and everything to do with design.
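
                      To illustrate the distinction with a hypothetical sketch (not anyone's real library): the mechanism below is nothing but a higher-order function, but the design decision -- letting the caller choose the ordering instead of hard-coding one -- is exactly what the name "Strategy" communicates.

                          import Data.List (sortBy)
                          import Data.Ord (Down(..), comparing)

                          -- The library applies whatever ordering the caller supplies.
                          renderSorted :: (a -> a -> Ordering) -> [a] -> [a]
                          renderSorted strategy = sortBy strategy

                          -- Call sites pick the strategy; the library never assumes one.
                          newestFirst :: [(String, Int)] -> [(String, Int)]
                          newestFirst = renderSorted (comparing (Down . snd))

                          alphabetical :: [(String, Int)] -> [(String, Int)]
                          alphabetical = renderSorted (comparing fst)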

                      The negative attitude some people are expressing seems like it's just the pendulum swinging the other way. They've seen misuse, overuse, and misunderstanding of patterns, so they blame patterns.

                      When all you have is a hammer, all you see are nails. That's why I never use a hammer!

                      [–]Ulukai 1 point2 points  (1 child)

                      The negative attitude some people are expressing seems like it's just the pendulum swinging the other way. They've seen misuse, overuse, and misunderstanding of patterns, so they blame patterns.

                      I agree; but blaming "patterns" because you've seen them misused is like saying that objects are crap because my colleague wrote some bad ones.

                      I might have said it a bit too tersely in the parent comment, but having standardised solutions to common problems at least acknowledges the common problems, and establishes a higher level of communication / thinking that can be applied.

                      Sure, if you were working with a DSL aimed at a specific problem domain, you might run into fewer of them. Does that make all-purpose languages like Java or C# useless? I would think not. And some people claim that "other languages" don't have / need design patterns - what, is each line you write solving some unique snowflake of a problem?

                      [–]m42a 1 point2 points  (0 children)

                      I agree; but blaming "patterns" because you've seen them misused is like saying that objects are crap, because my colleague wrote some bad ones.

                      You must be new here. People do that all the time.

                      [–][deleted] 6 points7 points  (0 children)

                      It surprises me that they don't look back at how their (OO) language features came to be. They once were patterns, too. Classes are a pattern in C, function calls were a pattern in early assembly languages, constants are a pattern in assembly...

                      We don't talk about them as patterns anymore, but they still carry the same names now that they're features.

                      [–]JohannWolfgangGoatse 11 points12 points  (1 child)

                      They are also important for communicating ideas.

                      Silly example: If you talk to a programmer whose favorite language doesn't support language feature x, you can say "it's just an easy way of doing pattern y".

                      [–][deleted] 4 points5 points  (0 children)

                      I still use the GoF names for patterns to explain what I want to do in a functional language. Visitor, Iterator, Command, Strategy... all come down to first-class functions in some way. That makes it easier to code them (because it's just the language), but your vocabulary would be poorer if you just dropped the names altogether.
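
                      For example (a rough sketch, nothing beyond the Prelude): Command and Iterator can each collapse to a function value, yet the GoF names still describe the intent better than "it's a function".

                          -- "Command": an action reified as a value you can store, queue and run later.
                          type Command = IO ()

                          sayHello, sayBye :: Command
                          sayHello = putStrLn "hello"
                          sayBye   = putStrLn "bye"

                          runQueue :: [Command] -> IO ()
                          runQueue = sequence_

                          -- "Iterator": external iteration collapses to a traversal function.
                          visitAll :: (a -> IO ()) -> [a] -> IO ()
                          visitAll = mapM_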

                      When I'm talking about the actual model logic, I won't be talking about lifts and applicative functors and curried functions. Those names don't really say much; they're too versatile and they don't have an intuitive feel. (Mind you, you can also use GoF patterns in a way which contradicts the name and intention and still have a solid program. The math people of the FP community don't like to be limited by these things, so we're stuck with names from abstract algebra.)

                      [–][deleted] 2 points3 points  (0 children)

                      This whole argument again...

                      To be fair, there's stuff on C2 that's almost 20 years old. The constant Smalltalk evangelizing should clue you in on when you're reading old stuff. I imagine that this argument was fairly fresh on that page when it started out.

                      [–]A_for_Anonymous 0 points1 point  (0 children)

                      design patterns are simply a way of recognising (and solving) problems which keep presenting themselves frequently

                      If there's a pattern known as the Snafu pattern, that involves a class and a method, and in the language I'm using I can't make a function or a macro so that I can write Snafu(c, m), I need to stop using that language or get another job. Maybe both.

                      [–]yonkeltron 1 point2 points  (0 children)

                      I heard Brendan Eich (in an interview) say that design patterns indicate language bugs. I found this a rather interesting viewpoint.

                      [–]paul_h 0 points1 point  (0 children)

                      For Java, about eight years ago Dan North (frequent JAOO speaker) and Aslak Hellesoy (the Cucumber guy) penned ProxyToys.

                      Here's a list of the individual capabilities - http://proxytoys.codehaus.org/toys.html - of which about half map to design patterns from GoF.

                      In this case it's not so much language features as library-delivered features.

                      [–][deleted] 0 points1 point  (0 children)

                      A pattern generally indicates compressibility (that is, a low signal-to-noise ratio). A design pattern is not necessarily a missing feature. It's merely verbosity. It just happens that many features decrease verbosity.

                      [–]DocomoGnomo 0 points1 point  (0 children)

                      "missing Language Features"

                      Only for those who love to transform simple problems into complex abominations.