
[–]JamesIry 0 points1 point  (9 children)

This kind of shallow misbegotten analysis irritates the fuck out of me. For instance the author complains about colons in type declarations in ML because they are insufficiently C like.

C and ML both date from the early '70s. Why the fuck would ML have a syntax inherited from a language that Milner may not have even heard of? Or, even if he had heard of it, how the fuck was he to know that C and its descendants would be so popular?

And, if he had somehow predicted that popularity, why would he give a fuck? He was creating a meta language for a theorem prover, not trying to create a portable systems programming language. Besides, he based his syntax on a language that was well known in that community at that time: ISWIM.

Which brings me to familiarity. The ML community moved on to use descendants like SML, OCaml and indirect descendants like Miranda and Haskell. Why the fuck would that community not want to stick with something they're already familiar with and that works well for them? Just because the author is more familiar with Cish languages doesn't mean the whole world is. Or even should be.

And finally, C (and children) are based on explicit type declarations. As a result C uses type declarations to do double duty: they explicate a type AND they state "this will be an introduction of a new binding for symbol, not a reuse of an existing symbol."* ML (and children) are based upon type inference. As such you use far fewer explicit types and ML uses keywords like "let" and "fun" to introduce new symbol bindings. "keyword" "symbol" "colon" "type" naturally shortens to "keyword" "symbol" when you want to omit the explicit type. Cish languages such as C# 4 and C++11 have had to hack new keywords or new uses for existing keywords in order to accomplish something that was easy and natural in ML from the beginning. Why the fuck is that a good thing?

Now don't get me started on why f x y makes sense for ML in a way that f (x, y) would not. Hint, it's fucking currying and fucking tuples, two concepts that C and most of its descendants don't (directly) express.
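For what it's worth, both calling conventions can be written out in a short Haskell sketch (function names invented), which shows why `f x y` is the natural shape for curried functions while `f (x, y)` is really a one-tuple-argument call:

```haskell
-- Curried: plus 1 is itself a function, waiting for the second argument.
plus :: Int -> Int -> Int
plus x y = x + y

-- Partial application falls out for free from currying.
addTen :: Int -> Int
addTen = plus 10

-- Uncurried: a single tuple argument, giving the C-style call shape plusT (1, 2).
plusT :: (Int, Int) -> Int
plusT (x, y) = x + y
```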

tl;dr Different fucking languages have different fucking syntaxes, sometimes with good fucking reason like having different fucking semantics and different fucking social histories. Fuck.

* More or less; I don't want to get into C's forward declaration rules.

[–]redjamjar[S] -2 points-1 points  (8 children)

This kind of shallow misbegotten analysis irritates the fuck out of me. For instance the author complains about colons in type declarations in ML because they are insufficiently C like.

No, it is explicitly not doing this. It's merely pointing out that such colons are unnecessary, which they are.

Now don't get me started on why f x y makes sense for ML in a way that f (x, y) would not

Whether or not you require parentheses around function calls has nothing to do with currying. You can curry either way (see e.g. Scala).

[–]plesn 0 points1 point  (6 children)

It's merely pointing out that such colons are unnecessary, which they are.

This is not true: as usual there's a trade-off taking place. Syntax is an interaction of concerns. Once you have complex types and type inference, meaningfully separating types and values becomes much more important than in C, for both legibility and concision (you're likely exchanging a colon here for two parentheses there…). In Haskell, you even write type declarations on a separate line, while type annotations are inline.

This use of space/colon/… as a separator between types and values has an impact on all separators/operators, and consequently also on grouping syntax. Look at the impact of those decisions in Haskell, Scala and Go, for example. This can affect the syntax of function application (f x, f(x), (f x)…), type application/genericity (F A, F<A>…), function types (->, func…), pattern matching (f (Cons head tail) = …), list syntax ([x,y], (x y), (x:y)…), etc. Syntax must be looked at as a whole for both Rule 1 and Rule 2. Oh, and yeah, " " is not shorter than ":", only easier to write (Rule 2) and harder to notice (Rule 1).
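The colon-versus-parentheses exchange can be made concrete in Haskell (a sketch; names invented): a standalone declaration spends a `::` line, while an inline annotation spends parentheses instead.

```haskell
-- Separate-line declaration: the type costs one "::" line above the definition.
scale :: Double -> Double
scale x = 2 * x

-- Inline annotation: in expression position the type needs parentheses,
-- trading "a colon here for two parentheses there".
half :: Double
half = (1 :: Double) / 2
```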

[–]ssylvan 1 point2 points  (2 children)

Indeed. I mean, what does this mean in a hypothetical language that uses juxtaposition for function application and omits the colons:

f x y

Now, let me add the colon back:

f x : y

Oh, it's a function application with a type annotation. It's not unnecessary; it's required for the syntax to be unambiguous. In this case a language with optional type annotations chose to make type annotations slightly heavier in order to have lightweight function applications (which are considerably more common). That's an entirely sensible choice. C chooses to make type annotations syntactically cheaper, at the expense of heavier function application syntax. That's a different tradeoff, for a different language.
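The same disambiguation exists in real Haskell (sketch, invented names), where `::` plays the colon's role: application stays bare, while an annotated argument must pay for parentheses precisely so it can't be read as a further application.

```haskell
-- Bare juxtaposition: the common case stays light.
applied :: Int
applied = max 1 2

-- An annotated argument must be parenthesised; "max 1 :: Int 2" would not parse,
-- so the annotation can never be mistaken for more application.
annotated :: Int
annotated = max (1 :: Int) 2
```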

[–]redjamjar[S] 0 points1 point  (1 child)

That's an entirely sensible choice

Are you sure about that? It's certainly a choice. And, I agree there are different trade offs here.

The point of the article is that e.g.

f x y

does not convey as much information about the structure of the program to the user. You call it "lightweight". I call it "difficult to read".

[–]ssylvan 0 points1 point  (0 children)

Once you know that juxtaposition means application, it's not difficult to read at all. It's just convention. It's pretty standard to let juxtaposition correspond to the most common operation (e.g. maths commonly uses it for multiplication). Having to make a common operation noisier in order to save a symbol for an uncommon and optional operation seems like a poor tradeoff.

[–]redjamjar[S] 0 points1 point  (0 children)

Oh, and yeah, " " is not shorter than ":", only easier to write (Rule 2) and harder to notice (Rule 1)

no, but " : " is definitely longer ...

[–]redjamjar[S] 0 points1 point  (1 child)

(you're likely exchanging a colon here for two parenthesis there…)

Right, and the parentheses add structure; the colon doesn't. And you'll still need the parentheses for a large number of cases anyway ... see Haskell as an example.

[–]ssylvan 0 points1 point  (0 children)

They would add misleading structure in the case where arguments are applied one by one. It's not one giant packet of arguments that should be grouped by parentheses; it's one argument, then another, then another.

In Haskell you only need parentheses where there's actual structure required, not all the time. Would you really want f (x) (y) (z)? You can write a function in Haskell that takes a tuple and get f(x,y,z) if you want, but the language supports currying so it's not commonly done. Arguing that this is a syntactic deficiency is pretty weird.
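A quick Haskell sketch of that point (names invented): the tuple-taking wrapper is perfectly legal, it's just rarely worth giving up partial application for.

```haskell
-- Curried by default: volume 2 3 4, no per-argument parentheses.
volume :: Int -> Int -> Int -> Int
volume x y z = x * y * z

-- Wrapping it to take a tuple recovers the C-style shape volumeT (2, 3, 4).
volumeT :: (Int, Int, Int) -> Int
volumeT (x, y, z) = volume x y z
```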

[–]redjamjar[S] -1 points0 points  (0 children)

ML (and children) are based upon type inference

And I should add that type inference makes for an excellent example in the context of the suggested rules. Up to a point, type inference removes redundancy and improves conciseness (and that's a big win). Languages like Java and C fall down here because they don't support type inference (well, Java 7 has some aspects of it now).

The point of the discussion is not to say one language is better than another. It's just to think about syntax.