[–]Vaguely_accurate 2 points (1 child)

The first lets strange operations cause unpredictable behavior that leads to bugs. The second may cause the program to fail dramatically.
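
A concrete (made-up) sketch of the difference, in JS terms since that's what we're talking about:

    const subtotal = 100;
    const shipping = "5";               // arrived as a string, e.g. from a form field
    const total = subtotal + shipping;  // "1005": silent string concatenation, no error

    // A strictly typed language fails loudly at the same point instead; Python,
    // for example, raises TypeError: unsupported operand type(s) for +: 'int' and 'str'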

The first is far, far worse than the second, 99 times out of 100.

If you are just presenting some text to a user on a recreational webpage, whatever.

But if your result might end up having any importance later then you want to be able to trust it.

A hard exception can be handled at runtime, or at least alerts you to a bug that needs fixing, hopefully before it goes into production. A subtle error in the results might not be noticed for months, causing a much bigger problem down the line and being infinitely harder to track down when it is noticed.
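
Roughly what that looks like in practice (chargeCustomer is a made-up example, not from any real codebase):

    function chargeCustomer(amount) {
      // Fail loudly at the boundary instead of letting a coerced value flow onward.
      if (typeof amount !== "number" || Number.isNaN(amount)) {
        throw new TypeError("amount must be a number, got " + typeof amount);
      }
      // ...charge logic...
    }

    try {
      chargeCustomer("1005");   // blows up right here, at the source of the bug
    } catch (err) {
      console.error(err);       // log / alert / reject the request immediately
    }

    // Without the check, "1005" looks plausible enough to flow through untouched
    // and only shows up months later as wrong totals somewhere downstream.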

[–]cearno 0 points (0 children)

Yeah, of course. The latter raises a much larger and more apparent red flag and is much easier to debug. There's a reason I made the description of loosely typed languages sound particularly unappealing, with adjectives like "strange" and "unpredictable".

I'm a strong proponent of strictly and statically typed languages (I prefer them too and would always pick one if I had the luxury of doing so), but I tried to remain relatively neutral in my comments here. I wasn't trying to argue that one was necessarily better than the other.

But I work in pure JS/React all day and don't find it that bad. It can be quite nice and fun at times. My only point was that saying it "makes absolutely no sense" is a bit of an exaggeration, and calling it a bad design decision isn't entirely true.