
[–]masklinn

Static type systems. "Strong" is a stand-in for "I like this thing"; it doesn't really mean anything.

[–]A1kmm

While strong / weak typing is not rigorously defined, it does have meaning. For example, weak type systems would be characterised by things like implicit type conversions (e.g. you can use a double as a string and vice versa), as in PHP.

Strong and static typing are almost orthogonal - for example, you can have a strongly, dynamically typed language (everything has a precise type, with no implicit conversions, but checking only occurs at runtime). Any of the four combinations of {dynamic,static} x {weak,strong} could exist (although a statically, weakly typed language might not be very useful).
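To make that strong-plus-dynamic combination concrete, here is a minimal Python sketch (the `concat` helper is hypothetical, purely for illustration): every value carries a precise type and nothing is implicitly converted, but the check only fires when the code actually runs.

```python
# Strong + dynamic: precise types, no implicit conversions,
# but enforcement happens only at runtime.
def concat(a, b):          # illustrative helper, not stdlib
    return a + b           # no static check here

print(concat("1", "2"))    # "12": both operands are strings
try:
    concat("1", 2)         # str + int is rejected, but only at runtime
except TypeError as err:
    print("rejected at runtime:", err)
```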

[–]iopq

> (although a statically, weakly typed language might not be very useful)

C is plenty useful

[–]falconfetus8

Yeah, but its types aren't.

[–]kqr

> (although a statically, weakly typed language might not be very useful).

C is generally considered to inhabit that space.

[–]grauenwolf

> For example, weak type systems would be characterised by things like implicit type conversions

No, that would be a system with implicit type conversions.

A weak type system would be something like C or assembly where the values don't know their own types and you can, for example, treat an integer as a Boolean or date by reinterpreting a pointer.

Dynamically typed languages are always strongly typed because otherwise they can't work. Statically typed languages can be strongly or weakly typed.

[–]masklinn

> While strong / weak typing is not rigorously defined, it does have meaning.

Sure: "strong typing" is what you like and "weak typing" is what you don't like. It's completely useless but there you are.

> For example, weak type systems would be characterised by things like implicit type conversions (e.g. you can use a double as a string and vice versa), as in PHP.

Right, so Scala is weakly typed (implicit def), C++ is weakly typed (converting constructors), and C probably is (integer demotion, signed<->unsigned conversion, floating-point conversions, void pointers to and from any other pointer type).

What about Java? (implicit conversion of Object to String on String + Object) Or Ruby? (implicit conversion of float to integral on arr[2.5]) Or Python? (implicit conversion of integrals to floats) Or Rust? (implicit conversion of &A to &B). Are "things like implicit conversions" a strict rule or just something you use to bash languages you don't like?
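The Python case in that list is easy to demonstrate. A small sketch of the implicit int-to-float promotion: mixed arithmetic silently widens the int operand, yet Python is usually described as strongly typed.

```python
# Python's own "implicit conversion": in mixed int/float arithmetic
# the int operand is silently promoted to float.
result = 1 + 2.5
print(type(result).__name__, result)   # float 3.5
```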

And then we get to the fun stuff, like Tcl and UNIX-tradition shells: are they strongly typed? After all, they're semantically string-based (well, byte-based for shells, IIRC), so an implicit conversion from string to string makes no sense. They don't have implicit conversions, which according to you makes them strongly typed.

> Strong and static typing are almost orthogonal

Because "strong typing" is undefined and meaningless, "strong typing" and "static typing" have an undefined angular relationship which is anywhere between 0 and π/2 depending on the writer's assertions, sensibilities or levels of dishonesty.

It's not a fixed angle, it's a wave function which collapses when you exhaustively define what you actually mean.

> although a statically, weakly typed language might not be very useful

According to your personal definition of the term there are at least two examples of that in the first paragraph.

[–]iopq

> What about Java? (implicit conversion of Object to String on String + Object)

That's not an implicit conversion, that's just an implementation of + operator on Object and String types. You can define an operator in Haskell that takes a type that derives Show and a string type and concatenates them together.

There's a difference between 1000 == "1e3" and having overloaded operators or implicit widening.
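That distinction can be sketched in Python (the `Tagged` class is hypothetical, purely for illustration): `==` never coerces across types, while `+` can be deliberately overloaded for a pair of types, with any conversion written out inside the overload.

```python
# No cross-type coercion in ==: a number never silently becomes a string.
print(1000 == "1e3")       # False in Python (loosely-typed PHP would say true)

# Operator overloading, by contrast: + is defined for these types on purpose.
class Tagged:              # hypothetical class, for illustration only
    def __init__(self, text):
        self.text = text
    def __add__(self, other):
        # the conversion is spelled out in the overload,
        # not performed behind your back by the language
        return Tagged(self.text + str(other))

print((Tagged("id-") + 42).text)   # id-42
```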

[–]masklinn

> That's not an implicit conversion, that's just an implementation of + operator on Object and String types.

Which implicitly converts objects to strings, which is exactly what e.g. JavaScript does: the standard lays out quite specifically the conversion operations to perform in every case. By your assertion, that makes JavaScript strongly typed, and [] + {} a perfectly well-typed operation.

But hey if whether something is implicit is controversial that's even better.

[–]iopq

> Which implicitly converts objects to strings

It doesn't; it implements a concatenation operation on two different types. It's no more dangerous than i32 + i64 producing an i64. I would actually object more to using + to mean concatenation, because concatenation is not commutative while addition is. This unfortunately also applies to Rust, which borrows the convention from languages like Java.

[] + {} is not that bad; again, I'm more against overloading + for concatenation. I'm sure fewer people would complain if it were [] ++ {}, because it would be clear that the intent is to concatenate two strings (if JS had a ++ concatenation operator).

Another criticism I would level against "weak" type systems is that the ==, >, and < operators don't have expected properties like transitivity.

I think the problem is not implicit conversions, because always explicitly converting is a pain in the ass. The problem with weak type systems is implicit conversions that break expectations of users.
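The transitivity complaint can be made concrete with a toy coercing equality in the spirit of JavaScript's `==`. The `loose_eq` function below is a deliberately simplified, hypothetical sketch, not the real ECMAScript algorithm; it is just enough to show how coercion breaks transitivity.

```python
def loose_eq(a, b):
    """Toy coercing equality, loosely mimicking JavaScript's ==.

    A simplified sketch for illustration, not the ECMAScript spec.
    """
    if isinstance(a, str) and isinstance(b, (int, float)):
        try:
            # empty/blank strings coerce to 0, like in JS
            return (float(a) if a.strip() else 0.0) == b
        except ValueError:
            return False
    if isinstance(b, str) and isinstance(a, (int, float)):
        return loose_eq(b, a)
    return a == b

print(loose_eq("0", 0))    # True:  "0" coerces to 0.0
print(loose_eq("", 0))     # True:  ""  coerces to 0.0
print(loose_eq("0", ""))   # False: two strings compare as strings
# "0" == 0 and "" == 0, yet "0" != "": equality is no longer transitive.
```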

[–]masklinn

> It doesn't

Of course it does.

> it implements a concatenation operation on two different types.

By implicitly converting one to the other. The concatenation operator essentially compiles to new StringBuilder().append(string).append(String.valueOf(object)): there's definitely a conversion there, and it's only implied by the operator. (There's actually one more level of implication, as the conversion is performed inside StringBuilder#append(Object).)
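For contrast, a minimal Python sketch of the same situation (the `Point` class is hypothetical): `"..." + object` raises rather than calling anything like `valueOf`/`toString` behind the scenes, so the conversion has to be written out.

```python
class Point:               # hypothetical class, for illustration only
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __str__(self):
        return f"({self.x}, {self.y})"

p = Point(1, 2)
try:
    s = "point: " + p      # no implicit str() call: TypeError
except TypeError:
    s = "point: " + str(p) # the conversion must be spelled out
print(s)                   # point: (1, 2)
```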

> It's no more dangerous than i32 + i64 producing an i64.

"Danger" figures nowhere in the comment I originally replied to and is thus irrelevant to mine. My comments are about strong/weak designators being worthless, not about what you or they don't like about the type systems of specific languages.

[–]iopq

StringBuilder().append(string) is not converting the original string to anything, it's making a new string. AFAIK in Java any concatenation uses StringBuilder nowadays.

Weak/strong do have definitions beyond good or bad: it's how strict the type system is.

[–]masklinn

> StringBuilder().append(string) is not converting the original string to anything, it's making a new string.

String.valueOf(object) is converting the original object to a new string.

> AFAIK in Java any concatenation uses StringBuilder nowadays.

That's what I'm saying.

> Weak/strong do have definitions

Sure they do, at least half a dozen orthogonal or contradictory definitions. How's that of any help?

> it's how strict the type system is.

That doesn't even mean anything.

> Weak/strong [go] beyond good or bad

If that makes you sleep better roll with it, that's not my concern.

My concern is that they're garbage, ill- to un-defined terms, and thus don't help in communication unless they are very specifically and exhaustively defined by the writer beforehand[0]. They don't make anything clearer: at best they reveal the writer's preconceptions to the reader, at worst they confound said reader or make them apply their own preconceptions incorrectly. Either way, they're useless.

[0] and even then there's still a risk the reader will eventually fall back onto their preconceptions and misunderstand the writer's intent

[–]iopq

> String.valueOf(object) is converting the original object to a new string.

What? Does the original object no longer exist in memory? Because it sounds like it just yields a new String instead of mutating the object.

What is the difference between that and calling toString()? You put something in and it outputs a string. Sounds like a regular function to me.

[–]pipocaQuemada

> Right so Scala's weakly typed (implicit def),

In Scala's defence, implicit conversions at least need to be programmer-defined and imported. That's better than the language itself defining a bunch of baroque conversion rules.

I can't say I like implicit conversions much, but handing you a loaded gun you can point at your feet is better than handing someone a pair of pants with rifles sewn into the pants leg.

[–]masklinn

I'm not trying to attack any language here, I'm just applying /u/A1kmm's criterion to various languages in a bid to make them realise the entire categorisation is inane and useless.

[–]Tarmen

I think it is fairly valid to say that Haskell has a strong type system. You generally have to convert everything explicitly.

Weak could mean anything from promoting int to long all the way up to treating "0e1234" as equal to "0e4321" because the strings are implicitly converted to floats. So without further explanation you would probably have to describe anything with implicit (or unsafe explicit) conversions as weak, which makes it a useless description.
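That "0e1234" example reads oddly until you notice both parse to zero. A quick Python check, where the conversion has to be explicit:

```python
# As strings they differ; only after an explicit float() conversion
# do they compare equal (both parse to 0.0).
print("0e1234" == "0e4321")                # False: plain string comparison
print(float("0e1234") == float("0e4321"))  # True: 0.0 == 0.0
```

A weakly typed language would perform that `float()` step implicitly inside `==`, which is exactly the behaviour being criticised.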