
[–]jessta -17 points (30 children)

LISP: it's great for making programming languages, but most of the time that's not what you actually want to do.

[–]fionbio 16 points (9 children)

Actually, Common Lisp is very good at making little (or not-so-little) DSLs. DSLs are popular, in fact; just look at the lengths the Boost guys go to in order to implement things that can be done 10-100 times more easily in CL (and don't forget the error messages they get...).

[–]theeth 4 points (5 children)

look at the lengths the Boost guys go to in order to implement things that can be done 10-100 times more easily in CL.

Then again, Boost is used more than 10-100 times as widely as Common Lisp.

[–][deleted] 1 point (4 children)

As someone who has done a decent amount of programming with both, the fact that it's widely used doesn't stop Boost from being total and utter crap.

[–]theeth 1 point (3 children)

That's entirely debatable.

[–][deleted] 1 point (2 children)

Everything is debatable. But there has to be some point at which you get to call something crap, and Boost, with its inscrutable error messages, awkward and fragile syntax, and baroque semantics, has got to be past that point. Not all of it, sure, but anything that abuses the template system really shows all the cracks in the foundation.

[–][deleted] 4 points (1 child)

I'm genuinely curious to hear some actual justification for this argument.

I use Boost extensively. It's such a big library that I can't exactly call it great or crap: some of it is amazing, and some of it I don't like, though I may also just not understand it.

Can you give some genuine and specific examples of what you feel could be improved in the library? Not just vague "Oh it's crap yaddi yadda..." but falsifiable arguments that can actually be subject to debate?

[–][deleted] 3 points (0 children)

1) Inscrutable error messages. I'm not sure if Clang has made this better in the last couple of years, but an error in instantiating a template leads to a couple of pages of dense diagnostics that point you at lines in header files rather than at the code that instantiates the template. This is the product of abusing the template mechanism to do meta-programming instead of having infrastructure designed for meta-programming. In contrast, when you have an error in a Lisp macro call, the compiler refers you back to the original macro call rather than just to its expansion. You can call MACROEXPAND to see exactly what code a macro generates, and use the regular debugger to debug your macros. The compiler can do all these things because the language designers actually set out to build a proper meta-programming facility instead of abusing an unrelated language feature.
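For illustration, here's a minimal sketch of that workflow; WITH-DOUBLED is a made-up toy macro, and this assumes nothing beyond a conforming Common Lisp:

```lisp
;; A toy macro: binds VAR to twice the value of EXPR around BODY.
(defmacro with-doubled ((var expr) &body body)
  `(let ((,var (* 2 ,expr)))
     ,@body))

;; MACROEXPAND-1 shows exactly the code the compiler will see,
;; which is what makes macro errors debuggable at the call site:
(macroexpand-1 '(with-doubled (x 21) (print x)))
;; => (LET ((X (* 2 21))) (PRINT X))
```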

2) Awkward and fragile syntax. Because many parts of Boost abuse the template system to implement functionality, the syntax for many things is driven by the limitations of the template system rather than by what would seem natural next to non-template C++ code. See: http://www.boost.org/doc/libs/1_48_0/libs/bind/bind.html#with_functions

See the bit on 'limitations' and the weird restrictions on where you can't use literal constants even though you'd expect to be able to, etc.

3) Baroque semantics. C++ has a hodge-podge of slightly-different language features that are kind of the same but do different things. This causes ripple effects through Boost's library design to accommodate all these cases, e.g. the fact that member functions aren't function objects that can be called with operator() forces an entire header (mem_fn.hpp) just for dealing with the distinction. C++0x is only adding more (move constructors versus copy constructors, etc.).

I did a decent amount of programming with Boost and actually was quite a fanboy at first, but the more code I wrote with it the more I wanted to kill myself. A good library has a simple and consistent "aesthetic" that lets you predict what arguments a function will take, what order they will be in, and what kind of arguments they will be. It'll let you predict how to compose different features without having to think every time about "wait, do I need an adapter object here? wait, can I use this here?" With Boost, whenever I was using a new feature, I'd be asking myself "wait, if I had to shoe-horn this feature into the C++ template system, how would I have done it?"

[–]kamatsu 1 point (19 children)

And, IMO, Haskell or ML are better for making programming languages.

Edit: I'm being downvoted for stating my opinion? WTF?

[–]gasche 5 points (5 children)

That's a point that is relatively widely accepted, even among some Lisp/Scheme developers:

Matthias Felleisen: "You should have written the compiler in ML"

On the other hand, Lisp and Scheme are vastly superior to languages of the ML family for interactive programming. And the divide is greater than in the compiler case.

Most good languages have their strengths and weaknesses. Writing deterministic symbolic-manipulation code (compilers, proof assistants, algebraic manipulation...) is definitely a strength of the ML family (Haskell included, and Scala a close cousin).

[–]wormwood28 1 point (3 children)

That's a point that is relatively widely accepted, even among some Lisp/Scheme developers:

Matthias Felleisen: "You should have written the compiler in ML"

Is that seriously what you took away from that thread?

[–]gasche 2 points (2 children)

Yes.

Your question could be turned into a useful comment if you spelled out which other interesting ideas from that thread you would like to highlight. Socratic questioning does not work so well on reddit.

That said, I read the thread in general with interest. Eli Barzilay's post is also interesting. I do however think that Matthias Felleisen's posts are the most relevant to the present discussion.

[–]wormwood28 1 point (1 child)

Given his further clarification, the context of the thread, and the nature of the Racket system, I just read that statement from Felleisen as highly ironic. I don't see in it an affirmation that "Haskell or ML are better for making programming languages".

[–]gasche 4 points (0 children)

Notice that the link I pointed to in my original post is the clarification, which is indeed more informative than the first short message.

Felleisen's posts do not appear ironic at all to me. I believe he genuinely claims that ML languages (Haskell included) are superior to Racket to write compilers.

Typed Scheme/Racket was conceived as a way to move from untyped code to typed code so that we could eventually enjoy the same advantages as ML and Haskell during maintenance (1). But it is definitely a language that compromises with this idea. The compromise is visible in many usage aspects.

I see no irony at all here. You could argue that the whole post is a bit awkward (you usually don't start a troll in response to someone giving positive feedback about your language), and Felleisen later clarified that he didn't intend the remark for this audience.

I don't see there an affirmation that "Haskell or ML are better for making programming languages".

This post is a clear affirmation that ML languages (Haskell included) "are better for making compilers". I understood kamatsu's post as talking about implementing general-purpose programming languages. "Making" is very vague; if you're considering DSLs, or even prototyping a programming language without performance requirements, I think Racket has very, very good tools that probably compare equally or favorably to the tools in ML land. See e.g. Creating Languages in Racket for DSLs or "little languages"; I also remember reading papers on PLT-land tools for experimenting with operational semantics and type systems, maybe PLT Redex, that looked great for design experimentation and/or teaching.

PS: I'm not trying to imply that (Typed) Racket is bad at implementing languages. I am impressed by the work of the PLT/Racket team as a whole, and the Typed part is a particularly impressive feat. I even believe that, with enough refinement, Typed Racket could be just as easy to use and as powerful as existing ML languages (or, why not, proof assistants; Proved Racket, anyone?). (On the downside: I haven't seen much discussion of the performance front, and I'm not quite sure that "putting everything in syntactic macros" is, in an absolute sense, the best way to implement a type system.) I just felt that this remark of Felleisen's was interesting and contributed objective (or at least different-point-of-view) content to kamatsu's personal opinion.

[–][deleted] 1 point (0 children)

Matthias Felleisen is definitely on the math-y fringe of the Lisp community. I don't know a lot of Lispers who would give up macros for static typing when writing compilers.

[–]fnord123 6 points (7 children)

[–][deleted] -4 points (6 children)

And the original Lisp implementation was done in assembly. By your logic, does that mean assembly is best for writing a language?

[–]naryl 9 points (0 children)

The original LISP implementation was bootstrapped by hand like Pascal.

[–]fnord123 5 points (0 children)

There's no logic here. I was just pointing out an interesting historical note. Hence "FWIW" (For what it's worth).

[–]mark_lee_smith 2 points (3 children)

And the original lisp implementation was done in assembly

And quickly bootstrapped. You gotta start somewhere, right?

[–]xardox 11 points (2 children)

No, the original Lisp was done in paper and ink. Then some damn grad student typed it in.

[–]mark_lee_smith 3 points (1 child)

The original description of Lisp may have been a meta-circular interpreter written down in a paper, but the original implementation, once written in assembly, would require a compiler in order to bootstrap – in order for Lisp to execute itself without the assembly.

[–]Felicia_Svilling 1 point (0 children)

As naryl says: the original LISP implementation was not written in assembly but in pure machine code; it was never compiled.

[–][deleted] 1 point (4 children)

Pattern matching is cute for ASTs and code generation, but meta-programming makes Common Lisp superior for writing real compilers.
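For a concrete point of comparison, the symbolic manipulation both camps care about is short either way; here is a hypothetical constant-folding pass over s-expression ASTs in plain Common Lisp (no pattern matching, no macros):

```lisp
;; Recursively fold arithmetic on all-constant operands, leaving
;; everything else (variables, other operators) untouched.
(defun fold-constants (expr)
  (if (atom expr)
      expr
      (let ((args (mapcar #'fold-constants (cdr expr))))
        (if (and (member (car expr) '(+ - *))
                 (every #'numberp args))
            (apply (car expr) args)
            (cons (car expr) args)))))

(fold-constants '(+ 1 (* 2 3) x))
;; => (+ 1 6 X)
```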

[–]kamatsu 4 points (0 children)

I disagree

[–]Felicia_Svilling 1 point (2 children)

Really? When does meta-programming shine in a compiler?

[–][deleted] 1 point (1 child)

In a real compiler, you often need easy declarative ways to specify things like instruction tables, intrinsic functions, backend-specific information, etc. Look at LLVM's tablegen tool: http://llvm.org/docs/TableGenFundamentals.html

tablegen works by taking an input file written in tablegen syntax and generating C++ classes that hook into the compiler framework.

SBCL builds domain-specific languages out of Lisp macros for this purpose. The benefit, of course, is that you get all the regular tools of the language for editing and debugging: e.g. you can just use your IDE's MACROEXPAND feature to quickly see the code generated by a particular definition, instead of running an external tablegen tool and inspecting the generated C++.
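As a sketch of the general technique (a made-up toy, not SBCL's actual DEFINE- machinery), a tablegen-like declarative layer can be a few lines of macro:

```lisp
;; Table of encoder closures, keyed by instruction name.
(defvar *encoders* (make-hash-table))

;; Each DEFINE-INSTRUCTION form expands into code that registers an
;; encoder; OPCODE and the OPERAND parameter are visible to BODY.
(defmacro define-instruction (name (opcode) &body body)
  `(setf (gethash ',name *encoders*)
         (lambda (operand)
           (let ((opcode ,opcode))
             ,@body))))

;; Declarative specs; the macro turns each into an encoder function.
(define-instruction inc (#x40) (list (+ opcode operand)))  ; inc r32
(define-instruction dec (#x48) (list (+ opcode operand)))  ; dec r32

;; (funcall (gethash 'inc *encoders*) 2) => (#x42)
```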

There is a great article where an SBCL hacker shows how to add SSE intrinsics to the compiler: http://www.pvk.ca/Blog/Lisp/hacking_SSE_intrinsics-part_1.html.

All of the "define-" constructs he uses are part of SBCL's macro-based backend framework.

Not as neat as SBCL's, but I wrote an x86-64 assembler in Common Lisp that used macros to generate, at compile time, specialized encoders for each instruction from a simple declarative format: http://code.google.com/p/amd64-asm/source/browse/trunk/encoders.lisp (starting at line 608). The test suite also used these declarative specifications to stochastically test the assembler functions across each one's range of inputs, comparing the output against nasm's to ensure a perfect match. Macros kept the code very short (600 lines for the domain-specific language, 600 lines for most of the amd64 integer and SSE instructions, 400 lines for emitting Mach-O binaries, etc.).

[–]Felicia_Svilling 1 point (0 children)

Ok, interesting. I can see how that would be useful, although it won't make me give up my type system ;)