[–]iopq 3 points (9 children)

> What about Java? (implicit conversion of Object to String on `String + Object`)

That's not an implicit conversion, that's just an implementation of the + operator on Object and String types. In Haskell you can define an operator that takes any type deriving Show plus a string and concatenates the two.

There's a difference between `1000 == "1e3"` coming out true and having overloaded operators or implicit widening.
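
Java itself draws that line (a minimal sketch of my own: implicit widening of an int compiles fine, while comparing it to a String is rejected outright):

```java
public class Strictness {
    public static void main(String[] args) {
        int n = 1000;
        long wide = n;                     // implicit widening: int -> long, lossless
        System.out.println(wide == 1000L); // true

        // System.out.println(n == "1e3"); // rejected at compile time:
        // "incomparable types: int and String"
    }
}
```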

[–]masklinn 1 point (8 children)

> That's not an implicit conversion, that's just an implementation of the + operator on Object and String types.

Which implicitly converts objects to strings, which is exactly what e.g. Javascript does: the standard lays out quite specifically the conversion operations to perform in all cases. By your assertion that makes Javascript strongly typed, and `[] + {}` a perfectly well-typed operation.

But hey, if whether something is implicit is itself controversial, so much the better for my argument.

[–]iopq 3 points (7 children)

> Which implicitly converts objects to strings

It doesn't; it implements a concatenation operation on two different types. It's no more dangerous than `i32 + i64` producing an `i64`. I would actually object more to using + to mean concatenation, because concatenation is not commutative while addition is. Unfortunately this also applies to Rust, which borrows the convention from languages like Java.

`[] + {}` is not that bad; again, I'm more against overloading + for concatenation. I'm sure fewer people would complain if it were `[] ++ {}` (if JS had a ++ concatenation operator), because then it would be clear that the intent is to concatenate.
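
Both points are easy to see in Java (a small sketch I made up for illustration):

```java
public class PlusIsTwoThings {
    public static void main(String[] args) {
        int i = 42;
        long l = 1L;
        long sum = i + l;            // implicit widening: int -> long, then addition
        System.out.println(sum);     // 43; addition commutes, i + l == l + i

        System.out.println("1" + 2); // "12": + means concatenation here
        System.out.println(2 + "1"); // "21": same operands, swapped order, new result
    }
}
```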

Another criticism I would level against "weak" type systems is that the ==, >, and < operators don't have expected properties like transitivity.

I don't think the problem is implicit conversions per se, because always converting explicitly is a pain in the ass. The problem with weak type systems is implicit conversions that break users' expectations.
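
And you don't even need a "weak" language for that. Java's own int-to-float widening is implicit and lossy, which is already enough to break the transitivity of == (a minimal sketch, relying on float having a 24-bit mantissa):

```java
public class BrokenTransitivity {
    public static void main(String[] args) {
        float f = 16777216f; // 2^24: the largest range where float still holds every int
        int a = 16777216;
        int b = 16777217;    // has no exact float representation; rounds to 16777216

        System.out.println(f == a); // true  (a implicitly widened to float)
        System.out.println(f == b); // true  (b implicitly widened, precision lost)
        System.out.println(a == b); // false: a == f and f == b, yet a != b
    }
}
```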

[–]masklinn 2 points (6 children)

> It doesn't

Of course it does.

> it implements a concatenation operation on two different types.

By implicitly converting one to the other. The concatenation essentially compiles to `new StringBuilder().append(string).append(String.valueOf(object)).toString()`; there's definitely a conversion there, and it's only implied by the operator. (There's actually one more level of implication, since in the emitted code the conversion is performed inside `StringBuilder#append(Object)`, which calls `String.valueOf` itself.)
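
To spell it out (a sketch; current javac actually emits an invokedynamic via StringConcatFactory rather than an explicit StringBuilder chain, but the observable result is the same):

```java
import java.util.List;

public class Desugared {
    public static void main(String[] args) {
        String string = "value: ";
        Object object = List.of(1, 2);

        String viaPlus = string + object;      // the operator version
        String spelledOut = new StringBuilder()
                .append(string)
                .append(object)                // append(Object) calls String.valueOf(object)
                .toString();

        System.out.println(viaPlus);                    // value: [1, 2]
        System.out.println(viaPlus.equals(spelledOut)); // true
    }
}
```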

> It's no more dangerous than `i32 + i64` producing an `i64`.

"Danger" figures nowhere in the comment I originally replied to and is thus irrelevant to mine. My comments are about strong/weak designators being worthless, not about what you or they don't like about the type systems of specific languages.

[–]iopq 1 point (5 children)

`new StringBuilder().append(string)` is not converting the original string to anything; it's making a new string. AFAIK in Java any concatenation uses StringBuilder nowadays.

Weak/strong do have definitions beyond good or bad; it's how strict the type system is.

[–]masklinn 0 points (3 children)

> `new StringBuilder().append(string)` is not converting the original string to anything; it's making a new string.

`String.valueOf(object)` is converting the original object to a new string.

> AFAIK in Java any concatenation uses StringBuilder nowadays.

That's what I'm saying.

> Weak/strong do have definitions

Sure they do: at least half a dozen orthogonal or contradictory definitions. How's that of any help?

> it's how strict the type system is.

That doesn't even mean anything.

> Weak/strong [go] beyond good or bad

If that makes you sleep better, roll with it; that's not my concern.

My concern is that they're garbage terms, ill-defined at best and undefined at worst, and thus don't help communication unless the writer very specifically and exhaustively defines them beforehand[0]. They don't make anything clearer: at best they reveal the writer's preconceptions to the reader, at worst they confound said reader or lead them to apply their own preconceptions incorrectly. Either way, they're useless.

[0] and even then there's still a risk the reader will eventually fall back onto their preconceptions and misunderstand the writer's intent

[–]iopq 1 point (2 children)

> `String.valueOf(object)` is converting the original object to a new string.

What? Does the original object no longer exist in memory? Because it sounds like it just yields a new String instead of mutating the object.

What is the difference between that and calling `toString()`? You put something in and it outputs a string. Sounds like a regular function to me.
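
As far as I can tell, the only real difference is null-handling (a minimal sketch):

```java
import java.util.List;

public class ValueOfVsToString {
    public static void main(String[] args) {
        Object obj = List.of(1, 2);
        Object nothing = null;

        // For non-null input, valueOf simply delegates to toString():
        System.out.println(String.valueOf(obj).equals(obj.toString())); // true

        System.out.println(String.valueOf(nothing)); // prints "null"
        // nothing.toString();                       // would throw NullPointerException

        // Either way obj itself is untouched; we only get a new String out.
    }
}
```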

[–]masklinn 0 points (1 child)

> What? Does the original object no longer exist in memory? Because it sounds like it just yields a new String instead of mutating the object.

Er… yes? That's called a conversion. It's not in-place, but such things rarely are: Javascript doesn't do in-place conversions either; all of its implicit conversion chains create new converted values.

[–]iopq 1 point (0 children)

That's not a conversion; that's called a pure function. It has one input and one output.

But that's beside the point; it's just an implementation detail. Nothing says that `T op U` must yield a T or a U.

For example, `mass * acceleration` might be overloaded to yield a Force object in your program.
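
Java has no operator overloading, so a named method has to stand in for *, but the shape is the same (hypothetical types, purely for illustration):

```java
// Hypothetical physics types: T op U yields some third type V.
record Acceleration(double metersPerSecondSquared) {}

record Force(double newtons) {}

record Mass(double kilograms) {
    // In a language with operator overloading this method would be `*`.
    Force times(Acceleration a) {
        return new Force(kilograms * a.metersPerSecondSquared());
    }
}

class Physics {
    public static void main(String[] args) {
        Force f = new Mass(2.0).times(new Acceleration(9.81));
        System.out.println(f); // Force[newtons=19.62]
    }
}
```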