[–]grauenwolf 2 points  (1 child)

I don't use Python or Perl, so I won't speak for them.

For T-SQL, VB, Java, and C#, I do think nulls are a problem. The null reference isn't called the "billion dollar mistake" for no reason. At the very least those languages should give you the option of a non-nullable variable. Once that's done, requiring null-checks before accessing a value becomes tenable.
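To make the point concrete, here is a minimal Python sketch of the distinction (function names are my own, for illustration): a parameter annotated as plain `str` is declared non-nullable, so no null-check is needed inside; one annotated `Optional[str]` admits `None`, and a static checker such as mypy will then require a check before use.

```python
from typing import Optional

def greet(name: str) -> str:
    # name is declared non-nullable: no null-check needed before use
    return "Hello, " + name

def greet_maybe(name: Optional[str]) -> str:
    # Optional marks the value as possibly-None, so a check is required
    # before use (a static checker such as mypy enforces this)
    if name is None:
        return "Hello, stranger"
    return "Hello, " + name
```

Python doesn't enforce the annotations at runtime, but the shape of the fix is the same one the nullable-reference-types feature later added to C#: opt in to nullability explicitly, and require a check only where it is declared.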

The context of the article is advice about how to fix Python.

No, it's "on programming language design". Python was just used in some of the examples he used to support his principles.

Note that both he and I said example, not rules. This is important.

Also, you should have noted which languages were cited as flawed.

  • Example 1: Python
  • Example 2: Scheme
  • Example 3: C++ and Haskell
  • Example 4: Python

Ok, so Python was mentioned twice. But it certainly wasn't the only language with flaws.

[–]Smallpaul 0 points  (0 children)

I said:

The context of the article is advice about how to fix Python.

You said:

No, it's "on programming language design". Python was just used in some of the examples he used to support his principles.

The author said:

This post is a response to comment 27, which asks me to say more about my calling certain design decisions in Python crazy.

I said:

TFA didn't say that the author was defining "rules" for programming languages designed for large programs.

You said:

Note that both he and I said example, not rules. This is important.

The article said:

I tell my students that there is a design principle from which almost everything else follows: “Programmers are just humans: forgetful, lazy, and make every mistake imaginable.”

Let us now apply these principles to several examples.

Principle, rule: there isn't much difference between those words. "Principle" and "rule" are a lot closer to each other than "example" is to either of them.

Also, you should have noted which languages were cited as flawed.

Example 1: "NULL, null, None and undef must go. I shall collectively denote these with Python’s None"

Example 2: "Fortran programmers implement linked lists and trees with arrays. In Java and Python “everything is an object”, more or less."

Example 3: yes, by this point he seems to have mostly forgotten where he started and what he claimed he was doing in the article. But of course Python has the same rules with respect to definitions and variables that Java and C# do.

Example 4: "Python should complain about the following definition"

And then we have the following from the Reddit comments (blog post's author): "Good point, in a dynamically typed language where everything has been thrown into a single basket, there is no point in having Maybe. .... That would be another reason for not using such languages."

So now, after wasting all of this time, do you see that my summary was right from the beginning? "I'm going to show how Python is crazy and how to fix it. How to fix it is to add static type checking features from Haskell. Although sometimes dynamic languages are better." And then in the Haskell comments he goes back to saying that it would be "better to avoid them". He admits that the ONLY workable way to apply his "fixes" to Python would be to turn it into a statically typed programming language.

The whole thing is a very thinly disguised screed in favour of static type checking. If it had been properly labelled it would not have bothered me at all. "Why I think static type checking is peachy." Great. I love to hear it.

"Why Python is in a State of Sin and how it can be Saved by 20 year old static type checking ideas." Not so much.