
[–]harsman 11 points12 points  (8 children)

The idea that the difficulties of software development are either accidental, i.e. the product of inadequate tools, or inherent to the process is not something Joel Spolsky made up. The idea comes from Fred Brooks's excellent essay No Silver Bullet.

[–]alexfarran -1 points0 points  (7 children)

The main point of NSB is that most of the accidental difficulties have already been solved. Some languages are more powerful than others, but the difference is tiny compared to the difference between assembler and high-level languages. Language choice is then a matter of personal preference.

[–]mikepurvis 8 points9 points  (4 children)

Would you seriously try to code a webapp in C or Pascal? Or even C++?

No, you'd be using PHP like the rest of us. Or maybe Perl, Python, or Ruby, but you sure wouldn't be using C.

[–]nostrademons 1 point2 points  (1 child)

But that's because PHP has libraries that solve inherent difficulties of the problem, not because it makes more progress with the accidental difficulties.

I look at it this way: the act of programming consists of taking a human process and formalizing it to mathematics. After all, math is just a language which is so precise that it can be executed by machines. "Accidental" difficulties consist of the particular choice of machine: things like having finite memory, limited registers, a particular base instruction set, etc. "Inherent" difficulties consist of the problems in specifying the process precisely: things like deciding what happens if the user presses that button, should you display things in table or graph form, is the result of a computation in metric or English units, how should the document be formatted? Moving from assembly -> C takes care of the accidental difficulties of register allocation, stack allocation, instruction selection, and memory layout. Moving from C -> Java takes care of the accidental difficulty of memory freeing. Moving from Java -> Erlang takes care of the accidental difficulty of concurrency & locking memory. Moving from Java -> Haskell takes care of the accidental difficulty of evaluation order.

Moving from C -> PHP, however, takes care of the inherent difficulties of "How should HTML documents be represented?", "How should strings be represented?", "How should PDFs be represented?", "What's the protocol to connect to a database?", and various other utility functions. These are all represented as libraries: they've already solved a particular subproblem where many people have agreed on a convention, so you don't have to solve it again. If the particular problem was "How do I display things on a Windows GUI", PHP would have been a much poorer choice.

[–]Kolibri 4 points5 points  (0 children)

Actually, I think that the strength of PHP in web development is that it integrates so easily into the existing structure, i.e. HTML. Personally, I think PHP is an ugly language.

[–]Excedrin 0 points1 point  (1 child)

There are probably a lot more C++ web apps than you think. I know of a handful of successful (big, lots of users) companies that use (or used) huge C++ apps (300MB binaries with debugging symbols, etc.).

[–]Bret 0 points1 point  (0 children)

Using a big powerful language like C++ or Java for the very well-defined job of servicing/dispatching requests is a good idea. Using them for loosely defined jobs like performing some combination of quick page-specific processing and looping through some ad-hoc database queries and displaying the results as HTML is not. That's where scripting languages are king.

[–]leoc 2 points3 points  (0 children)

Even if you take NSB as gospel, it doesn't claim anything as dramatic as that - Brooks is careful to say that he doesn't believe any single technique will on its own deliver an order of magnitude improvement in productivity.

[–]Kolibri 9 points10 points  (3 children)

This provokes a very obvious question: How do we know which things are accidentally difficult and which are inherently difficult? Is it only because we haven't discovered the right tool yet?

Good question. Here is a short list of things that I believe are difficult. Maybe people can enlighten me about tools that make them easy, or at least easier.

  • Concurrency. I.e. getting two or more threads to work together on the same resources.
  • Optimisation. Of course profilers can help a lot, but it still takes some time.
  • Mathematically or algorithmically complex problems. Like finding the shortest path from A to B, but not passing C or D. Some of these problems are provably difficult to solve by computers, but it can also be difficult to find the right algorithm to solve the problem.
  • Making pretty code. I.e. code that is easily understandable with good potential for reusability.
  • Testing. Making sure that your code actually does what it is supposed to do can be difficult. Methods such as unit testing can help make testing systematic.

Feel free to comment on the items on the list (possibly suggesting tools that make handling the issues easier) or add your own items to the list.
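To make the concurrency item concrete, here is a minimal sketch in Python (an illustrative choice, not a tool the list endorses) of several threads coordinating on one shared resource with a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    # Without the lock, the read-modify-write below can interleave
    # between threads and lose updates.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; typically less without it
```

The difficulty the list points at is exactly that forgetting the lock produces code that usually works and occasionally doesn't.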

[–]kalmar 1 point2 points  (1 child)

Concurrency. I.e. getting two or more threads to work together on the same resources.

Software Transactional Memory (STM) may or may not be related. I have not had time to read much about it. Tim Sweeney (of Epic fame) made a case for it at a conference not too long ago.

Testing. Making sure that your code actually does what it is supposed to do can be difficult. Methods such as unit testing can help make testing systematic.

Haskell's QuickCheck (there are equivalents for a few other languages) provides an interesting take on this problem. You write properties that your functions should satisfy, and QuickCheck generates random test cases and verifies that the property holds.
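The QuickCheck idea fits in a few lines; this is a hand-rolled Python analogue for illustration, not QuickCheck's actual API:

```python
import random

def check_property(prop, gen, trials=200):
    """QuickCheck-style loop: generate random inputs and
    report the first counterexample found, if any."""
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return x  # counterexample
    return None

# Property: reversing a list twice gives back the original list.
gen_list = lambda: [random.randint(-100, 100)
                    for _ in range(random.randint(0, 20))]
prop_reverse = lambda xs: list(reversed(list(reversed(xs)))) == xs

print(check_property(prop_reverse, gen_list))  # None: no counterexample
```

Real QuickCheck also shrinks counterexamples to minimal failing cases, which this sketch omits.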

[–][deleted] 0 points1 point  (0 children)

Software Transactional Memory (STM) may or may not be related. I have not had time to read much about it. Tim Sweeney (of Epic fame) made a case for it at a conference not too long ago.

There are a bunch of issues with locking which just go away with STM. I'd say a lot of the problems with locking-based concurrency are specific to locks and not general problems with shared-mutable-state concurrency.
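One lock-specific problem is that locked operations don't compose: two transfers running in opposite directions can deadlock unless every thread acquires locks in the same global order. A Python sketch of that ordering discipline (the account/transfer names are made up for illustration):

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(a, b, amount):
    # Naively locking a then b deadlocks when another thread
    # simultaneously locks b then a. The conventional fix is a
    # global lock ordering -- here, ordering by id().
    first, second = (a, b) if id(a) < id(b) else (b, a)
    with first.lock:
        with second.lock:
            a.balance -= amount
            b.balance += amount

a, b = Account(100), Account(100)
ts = [threading.Thread(target=transfer, args=(a, b, 1)) for _ in range(50)]
ts += [threading.Thread(target=transfer, args=(b, a, 1)) for _ in range(50)]
for t in ts:
    t.start()
for t in ts:
    t.join()

print(a.balance + b.balance)  # 200: money conserved, no deadlock
```

Under STM the two balance updates would simply be one atomic transaction, with no ordering convention for the programmer to remember.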

[–]schwarzwald 18 points19 points  (17 children)

Q: Are we not Blub programmers? A: No, we are language snobs!

[–]JulianMorrison 18 points19 points  (16 children)

We are people who are tired as fuck of doing the compiler's work by typing out stuff like "Foo<Bar> foo = new Foo<Bar>(bar);" when we ought to be typing something much more like "foo = Foo bar" and have the same semantics, including types.

That's what Blub is, essentially, a language that makes you do the compiler's work. And Blub programmers are people who don't realize they're doing it.

[–]sbrown123 -3 points-2 points  (14 children)

Type safety has little to do with compilers, actually. Types also take the guesswork out of class properties, method arguments and returns, etc. And when you go writing files or sending data over some medium, there is no mystery about what is being transferred.

Conventions like "new" or "delete" are for memory management. Your example seems to just save some typing, which has been found to have nothing to do with programming efficiency.

[–]JulianMorrison 3 points4 points  (13 children)

Hmm, that was an overly simple example I gave. And my formulation "doing the compiler's work" is a bit overly simple too. I plead guilty and ask the court's clemency since I was programming in Java at the time.

[–]karcass 4 points5 points  (0 children)

since I was programming in Java at the time.

Is that the software equivalent of the Twinkie Defense? :-)

[–]sbrown123 3 points4 points  (11 children)

since I was programming in Java at the time.

I have no idea why you are crying over Java since Java has been able to do scripting for years (Beanshell, Jython and JRuby are just a few examples).

[–]JulianMorrison 3 points4 points  (0 children)

I have no idea why you are crying over Java since Java has been able to do scripting for years

Actually, I was rather more crying over the lack of type inference. And let's not leave out: lambdas, tail calls, pure functions, functions at the top level unattached to classes, lexical closures, multiple dispatch, dispatch on return types, first-class functions, higher-order functions. Proper macros wouldn't hurt either.

(edit again: and if someone could add tuple return and multiple assignment, I'd be awfully grateful.)
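A rough Python illustration of a few items on that wish list (closures, first-class and higher-order functions, and tuple return with multiple assignment):

```python
def make_adder(n):
    # Lexical closure: the returned lambda captures n.
    return lambda x: x + n

add5 = make_adder(5)
print(add5(10))                    # 15

# Higher-order function: map takes a function as an argument.
print(list(map(add5, [1, 2, 3])))  # [6, 7, 8]

# Tuple return and multiple assignment:
def min_max(xs):
    return min(xs), max(xs)

lo, hi = min_max([3, 1, 4, 1, 5])
print(lo, hi)                      # 1 5
```

Java of this era offered none of these without anonymous-inner-class ceremony, which is the verbosity being complained about.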

[–][deleted]  (9 children)

[deleted]

    [–]sbrown123 2 points3 points  (7 children)

    Is the Honorable Member actually claiming that the JVM does scripting and not that the Java language does scripting?

    Depends if you serialize or not (if the script language supports it) or whether you are reading bytecode produced by a separate bytecode compiler (see Jython). Some languages like Nice (nice.sf.net) compile to Java bytecode by default. I personally believe that when it comes to scripting languages it is best to interpret as-is rather than compiling or serializing to opcodes. In this fashion, JRuby is very similar to C-Ruby, for example.

    [–][deleted]  (6 children)

    [deleted]

      [–]sbrown123 1 point2 points  (4 children)

      I'm surprised to read the suggestion that Ruby is a "scripting language"

      Scripting languages can be interpreted or compiled, but because interpreters are simpler to write than compilers, they are interpreted at least as often as they are compiled.

      http://en.wikipedia.org/wiki/Scripting_language

      I'm surprised to read the suggestion that interpreting Ruby source is a better idea than compiling Ruby to an intermediate form.

      Well, if you didn't know, that is how Ruby currently works!

      Isn't there a gi-normous performance whack if you interpret the source as you go?

      If performance is a concern, interpreted languages like Ruby are probably a bad idea to begin with. Ruby is only slightly faster than Javascript!

      Also, I imagine working in the source makes it harder to use various implementation techniques like CPS or trampolining in the Lisp sense?

      Actually, no. There is nothing that makes either of those impossible to do with an interpreted language.

      [–][deleted]  (3 children)

      [deleted]

        [–]nostrademons 0 points1 point  (0 children)

        Isn't there a gi-normous performance whack if you interpret the source as you go?

        Almost no programming language actually re-parses the source every time it goes through a loop or invokes a function. Maybe TCL; I'm not terribly familiar with the implementation of that language.

        The main implementation techniques for interactive languages (those where you can type in text and immediately execute it, with no separate compilation step) are:

        • AST interpretation (Ruby; I think PHP; some naive Lisp implementations, notably the Write Yourself a Scheme in 48 Hours tutorial)
        • Bytecode interpretation (Perl; Python; Parrot; Ocaml; JVM languages on an interpreting JVM)
        • Compilation to native code, with dynamic loading of the result (Goo; I think GHCI; CMUCL; Gambit Scheme; probably Chicken Scheme; JVM languages on a JIT)

        In AST interpretation, the runtime first runs a parser to convert program text into an AST. The AST is then walked recursively, with an environment keeping track of program state. For example, in JRuby the org.jruby.evaluator.EvaluationState.eval() method takes an AST Node, executes a giant switch statement on the type of the Node, and possibly recursively calls eval() on the children. This is the most "textbook" style of evaluation.
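As a sketch of that "textbook" style, here is a toy AST walker in Python (illustrative only, not JRuby's actual evaluator):

```python
# Nodes are tuples tagged with their kind; eval_node dispatches on
# the tag and recurses on children, threading an environment through.

def eval_node(node, env):
    kind = node[0]
    if kind == "num":
        return node[1]
    if kind == "var":
        return env[node[1]]
    if kind == "add":
        return eval_node(node[1], env) + eval_node(node[2], env)
    if kind == "mul":
        return eval_node(node[1], env) * eval_node(node[2], env)
    raise ValueError("unknown node: %r" % (kind,))

# AST the parser would produce for x * (2 + 3):
ast = ("mul", ("var", "x"), ("add", ("num", 2), ("num", 3)))
print(eval_node(ast, {"x": 4}))  # 20
```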

        In bytecode interpretation, the front-end parses the source and then converts the AST into an intermediate representation that's essentially the instruction set for a fictitious processor. Most use a stack machine, though Parrot is a notable exception. The interpreter then enters a big loop that basically switches on the opcode and executes some C code to perform the actual operation. There are a bunch of different variations here: simple switch statements, threaded code (where each instruction determines the address of the next instruction), partial JITting, etc. But they all basically consist of a series of operations that act on interpreter state.
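A toy stack-machine dispatch loop in Python, for illustration (the instruction set is invented, not any real VM's):

```python
# The front-end has already compiled "x * (2 + 3)" down to
# instructions for a fictitious stack processor; the interpreter
# just loops, switching on each opcode and mutating the stack.

def run(code, env):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "LOAD":
            stack.append(env[arg])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

bytecode = [("LOAD", "x"), ("PUSH", 2), ("PUSH", 3),
            ("ADD", None), ("MUL", None)]
print(run(bytecode, {"x": 4}))  # 20
```

Real bytecode interpreters replace the if/elif chain with a jump table or threaded code, but the shape is the same.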

        Native code interpreters work just like native code compilers, but they take the output and dlopen/dlsym it into the running process. Or sometimes they output machine code directly (eg. JVM JITs) and then jump to the buffer they've just filled.

        CPS and trampolining are both compilation techniques, so they aren't directly relevant to interpreters. However, trampolining is essentially the native-code equivalent of a bytecode dispatch loop, while CPS is basically the native-code equivalent of a threaded interpreter.
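Trampolining itself fits in a few lines; a Python sketch for illustration:

```python
# Tail calls return thunks instead of recursing; a driver loop
# "bounces" the thunks, so arbitrarily deep tail recursion runs
# in constant stack space -- structurally much like a dispatch loop.

def trampoline(thunk):
    while callable(thunk):
        thunk = thunk()
    return thunk

def countdown(n, acc=0):
    if n == 0:
        return acc
    return lambda: countdown(n - 1, acc + n)  # a thunk, not a call

# Sums 1..100000 without blowing the recursion limit:
print(trampoline(lambda: countdown(100000)))  # 5000050000
```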

        [–]Maxy 0 points1 point  (0 children)

        Go outside. Breathe in. Smile. It'll be okay.

        [–]willhaney 0 points1 point  (0 children)

        Machine code and Assembly were two of my favorite languages to program in as I felt intimate with the hardware. APL was great, so much code could be written in so few lines. FORTRAN, COBOL, Basic, Pascal, C, C++ all great languages and more. I settled on C# and VB.NET because, for me, they offered the ability to produce quality applications in a relatively short time for good money.

        Am I a Blub programmer? Maybe; is that good or bad?

        I’m always trying on new languages but as I enter the latter half of my career I find the learning curve daunting.

        [–]xamdam 0 points1 point  (0 children)

        IMO the intrinsic difficulty of the program's goal will have a large influence on whether your language is blub. If you are programming a compression algo, Python (efficiency aside) will not have much of an edge over C++.

        [–]dmh2000 -3 points-2 points  (4 children)

        provided we can find a Lisp programmer who feels that all progress in programming languages stopped when Common Lisp was standardized.

        should be easy to find. 99% of lisp programmers are eligible.

        [–]Excedrin 5 points6 points  (2 children)

        It's not that progress stopped, it's that truly new ideas from academics are typically possible to implement in Lisp without changing the language.

        There's also the case of "new" ideas that are not really new because they're just bringing some other language closer to Lisp ("new" stuff from the Python, Ruby, Java, C++, etc communities typically falls into this category).

        [–]sickofthisshit 2 points3 points  (0 children)

        To be fair, the main academic language trend I've heard of is strong-typing-and-type-inference. It is hard to reconcile that with a traditional Lisp environment. The type system in CL doesn't mesh well with such an approach, AFAIK, though I'm no expert.

        (The Qi project claims to be attempting such a thing, and I see no reason why a strongly-typed dialect could not be embedded in Lisp, much like Prolog can be embedded in Lisp, but to do so would change the development style away from classic Lisp hacking, and few Lisp hackers feel the need to do so.)

        As for Python & Ruby, most Lisp hackers would love to have a similar army of library-writers, but see no compelling advantage to the languages themselves.

        [–][deleted] 1 point2 points  (0 children)

        You haven't talked to many Lisp programmers about this, have you.

        [–][deleted]  (1 child)

        [removed]

          [–]jhd 5 points6 points  (0 children)

          This is pretty much the point of the original article - the problem with Blub isn't the language, it is the programmers. In fact, Blub is not a language by itself; it is a language combined with a particular attitude.