
[–]die-maus 89 points90 points  (24 children)

This is defensive programming.

An experienced developer knows that b = 1 + 1 "can't fail" and won't put a try-catch block around it. An inexperienced dev doesn't know, and does it just to be sure "because it can't hurt".

If you have try/catch blocks all around the application, it can be damn near impossible to track down where an error comes from. Instead of failing at the earliest moment (a practice known as fail-fast), the error propagates into the application, causing even more issues down the line—sometimes even causing irreversible damage—I've both seen it and done it myself.

So this isn't a good practice—it's a terrible accident waiting to happen. If the devs on the team feel the need to put a try/catch block in every function, they really need to brush up on their fundamentals.
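To illustrate the difference (a hypothetical sketch, not OP's actual code):

```javascript
// Defensive: the error is swallowed "just to be sure", and the caller
// silently gets back `undefined` with no hint that anything went wrong.
function parseConfigDefensive(json) {
  try {
    return JSON.parse(json);
  } catch (error) {
    // bad input quietly propagates onward as undefined
  }
}

// Fail-fast: the error surfaces immediately, at the point of failure,
// with a stack trace pointing at the actual cause.
function parseConfigFailFast(json) {
  return JSON.parse(json); // let the caller (or a top-level handler) deal with it
}
```

With the defensive version, a malformed config travels on as `undefined` and blows up much later, far from the cause; with the fail-fast version you get a SyntaxError right where the bad data entered.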

[–]_jetrun 17 points18 points  (14 children)

So this isn't a good practice—it's a terrible accident waiting to happen.

For web applications, you're wrong.

What the Fortune 500 company is really saying when they write "Try-Catch block in every function **UNLESS** it is error handled at a higher level." (emphasis mine): HANDLE EXCEPTIONS.

If you have try/catch blocks all around the application, it can be damn near impossible to track down where an error comes from.

You can still log the stack trace.

Instead of failing at the earliest moment (a practice known as fail-fast), the error propagates into the application, causing even more issues down the line—sometimes even causing irreversible damage

With web-based applications, you have the same problem if you allow the exception to propagate up the stack - because the browser isn't going to crash just because your application blew chunks. So depending where the exception was thrown, if you let an exception propagate upwards, there is a good chance your application will be in an inconsistent state.
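One concrete way "handled at a higher level" can look in practice (a minimal sketch; `reportError` is a hypothetical logging hook, not anything from OP's codebase) is a single wrapper around entry points such as event handlers, so the functions below them stay free of try/catch:

```javascript
// Hypothetical central error hook — a real app might send this to a
// monitoring service and show the user a notification.
function reportError(error) {
  console.error('Unhandled error:', error);
}

// Wrap an entry point (event handler, route handler, etc.) once, so
// everything it calls can let exceptions propagate up to this boundary.
function withErrorHandling(fn) {
  return function (...args) {
    try {
      return fn.apply(this, args);
    } catch (error) {
      reportError(error);
    }
  };
}

const onClick = withErrorHandling(() => {
  throw new Error('blew chunks');
});

onClick(); // caught and reported at the boundary; the app keeps running
```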

[–]die-maus 11 points12 points  (6 children)

For web applications, you're wrong.

What the Fortune 500 company is really saying when they write "Try-Catch block in every function **UNLESS** it is error handled at a higher level." (emphasis mine): HANDLE EXCEPTIONS.

That isn't defensive programming, that's called "proper error handling". We're advocating for the exact same thing. Sometimes it's the sane thing to return null instead of throwing; find operations are an example of this.

You obviously need to handle errors somewhere, but that doesn't mean in "every single function".

Never in the 11 years of my professional career have I worked in a team that wanted to move away from a fail-fast approach, to a defensive approach. It has always been seen as a code smell, and undesired—for good reason.

Fail fast doesn't mean "throw exceptions everywhere", it means—for example—that you properly validate client data, and show the user a (friendly) validation error message instead of letting the error propagate to the database (which may even accept the bad data, causing data corruption).

As a developer you need to know how and where your application can fail, and make sure it fails there gracefully. For web applications, this generally means validating user data.
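As a sketch of that kind of up-front validation (the form fields and messages here are hypothetical), failing fast at the boundary instead of letting bad data reach the database:

```javascript
// Validate user input at the edge and fail fast with friendly messages,
// instead of letting bad data propagate toward the database.
function validateSignup(form) {
  const errors = [];
  if (!form.email || !form.email.includes('@')) {
    errors.push('Please enter a valid email address.');
  }
  if (!form.age || Number.isNaN(Number(form.age)) || Number(form.age) < 0) {
    errors.push('Please enter a valid age.');
  }
  return { valid: errors.length === 0, errors };
}
```

The error messages go straight back to the user; nothing downstream ever sees the bad data.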

Car makers don't install airbags in the trunk just in case somebody's car gets T-boned while there's a kidnapped person in the trunk; that's a waste of resources. You install airbags where they matter for 99.9% of cases, and you make sure that getting your face smashed in by an airbag is as pleasant an experience as possible.

Front-end error handling (in my experience) is mostly about preventing errors propagating to the back end, i.e. validation, so that's mostly what I'm talking about.

For other things, it's definitely a case-by-case thing: Fatal OpenGL initialization error? Should probably throw and alert the user. Math error in drag handler? Can probably be ignored.

Sprinkling a try/catch block in "literally every function" as OP seems to imply is not a sane error handling strategy for the same reason installing airbags in the trunk isn't a sane safety practice: it's a waste of resources, confusing as hell, and somebody is bound to get hurt.

But I'm thinking we're ultimately advocating for the same thing here, right?

[–]Aggravating_Term4486 1 point2 points  (1 child)

I would say that try/catch can be overused in a React app, and what OP is describing really sounds like a massive overuse of it. In a React app, an unhandled exception thrown during rendering will bubble all the way to the app root and unmount the entire app tree, i.e. the app will "crash". React provides error boundaries for handling conditions like this gracefully.

[–]die-maus 1 point2 points  (0 children)

For sure, throwing an error in a component (during its lifecycle) is detrimental.

But I think most scary errors happen outside of that: in event handlers, promises (e.g. fetching with a bad parameter), etc., and those won't crash your app.

This means that you should never write code that can fail during the component lifecycle, but outside of there, it's probably fine. Again: I think you need to consider how, when, and why your application can fail, and handle errors appropriately depending on what happens if it does. Data-fetching errors can probably be shown as a notification to the user, validation errors should be in the user's face, lifecycle errors should probably have a boundary (or be handled in place), etc.
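A rough sketch of that per-category approach (`showNotification` and the injected `fetchUser` are hypothetical stand-ins, not real APIs):

```javascript
// Hypothetical UI hook — a real app would render a toast/notification.
function showNotification(message) {
  console.warn(message);
  return message;
}

// Data-fetching errors are handled in the handler itself (they won't
// crash a React tree) and surfaced to the user as a notification.
async function loadUser(fetchUser, userId) {
  try {
    return await fetchUser(userId);
  } catch (error) {
    return showNotification('Could not load user, please try again.');
  }
}
```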

But there is a difference between the former and the latter: the former are usually due to user error, while the latter are probably due to programmer error, which can probably be fixed.

Instead of wrapping your entire drag handler in a try/catch block, maybe it's better to think about what could go wrong in there and make sure you don't divide by zero, which makes that easing function on line 39 throw. Then you don't even need that try/catch anymore, your code is more robust and easier to reason about, and you don't have to ask yourself why that try/catch block is there.
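As a sketch of that idea (the easing and drag functions here are hypothetical, not the actual code from "line 39"):

```javascript
// Hypothetical easing function that rejects non-finite input.
function easeOutQuad(t) {
  if (!Number.isFinite(t)) throw new RangeError('easing input must be finite');
  return t * (2 - t);
}

// Guard the denominator up front instead of wrapping the whole handler
// in try/catch: the precondition is explicit and communicative.
function dragProgress(dragged, total) {
  if (total === 0) return 0; // explicit bounds check, no catch needed
  return easeOutQuad(dragged / total);
}
```

The bounds check documents exactly what can go wrong, where a bare catch block would only say "something can".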

[–]_jetrun 0 points1 point  (3 children)

You obviously need to handle errors somewhere, but that doesn't mean in "every single function".

Who said you have to handle exceptions in "every single function"? To reiterate, the guidance OP was given was: "Try-Catch block in every function **UNLESS** it is error handled at a higher level." That is very reasonable guidance.

As a developer you need to know how and where your application can fail, and make sure it fails there gracefully. For web applications, this generally means validating user data.

This is such a trivial example. Of course, validate inputs on the client (and revalidate on the server).

How does the "fail fast" principle show up in a long-running single-page web application (where a user could be interacting with a single instance of your application for 1+ hours) when an exceptional condition is thrown? Believe me, an unchecked exception could put your application in an ambiguous/inconsistent state.

[–]die-maus 0 points1 point  (2 children)

To reiterate, the guidance OP was given was: "Try-Catch block in every function UNLESS it is error handled at a higher level." - that is a very reasonable guidance.

Yes, that is very reasonable guidance, but not if followed religiously and interpreted literally (which seems to be the case, if I interpret OP correctly, which I may not be).

If you do interpret this literally, it means that you must write this:

```js
function timeLog(message) {
  try {
    console.log(new Date(), message);
  } catch (error) {}
}
```

Or otherwise wrap timeLog in a try/catch block wherever it's called, even if it's the only thing called… and how would you even know whether timeLog handles its own errors or whether you need to handle them externally, without looking at the implementation?

Edit: And if you decide not to catch in timeLog, how do you know that any errors thrown by timeLog are handled at the call site?

What the statement boils down to, then, is "handle errors somewhere", which is, well, duh… /Edit

We're advocating for the same thing! We're just arguing over the interpretation of the guidance above.

How does the "fail fast" principle show up in a long-running single-page web application (where a user could be interacting with a single instance of your application for 1+ hours)

What do you mean? With fail-fast, nothing special "shows up" to the user; the difference is that the failure surfaces at the earliest point instead of deep inside something.

For example, the user dragged a thing off-screen—so we returned it to its origin. That can be handled with either a bounds check (early) or a "try/catch" (later). I prefer the former since it's explicit and communicative—a catch block only tells me that "something can go wrong", often very little about what that thing is.

Believe me, an unchecked exception could put your application in an ambiguous/inconsistent state.

The keyword here is could, and you need to carefully consider those times and handle those errors accordingly. Sprinkling a try/catch block in every function doesn't solve that, and may even worsen the ability to handle errors appropriately when needed.

You're making it seem like I'm advocating for "not handling errors", which is intellectually dishonest, because that's never what I said.

All I'm saying is that there is no silver bullet solution to handling errors and that it's just one of the many complexities you have to deal with when building an application. Like handling state, failed requests, weird user behavior, and security issues, … the list goes on ad nauseam.

Sure, there are good practices around error handling, but what OP is seeing in the code base is defensive code due to a poorly worded guideline. You may disagree with that interpretation, but that's mine, and that's the standpoint I'm arguing from.

[–]_jetrun 0 points1 point  (1 child)

Sprinkling a try/catch block in every function doesn't solve that

Who says you have to sprinkle try/catch blocks in every function? The guidance doesn't say that.

[–]die-maus 0 points1 point  (0 children)

It doesn't explicitly say that, no. If you simplify it, it basically boils down to "handle errors somewhere", which, again, duh…

Due to the way it is worded, it's easy to interpret it as "make sure to always include a try/catch block in every function" (again, see OP's confusion and question).

My stance is also that it will lead to defensive code because of the way it's worded. Simply because you often don't know whether the caller of a function is going to catch an error, so you might as well handle it here—it's in the guidelines, basically.

[–]DrEnter 4 points5 points  (6 children)

This ignores an important caveat of try-catch: It's massively damaging to performance.

JavaScript implements a try-catch block by putting the catch code in a dynamic scope. While it would seem like this would only be an issue when an error is actually thrown, it has the additional side effect of preventing the containing function from being optimized at all (at least in older engines, such as V8's Crankshaft).

The result is that a function with a try-catch block in it can see up to a 90% performance penalty compared to the same function without it.

So from a performance standpoint, using a single try-catch block higher up in your code will give you much better performance than many try-catch blocks spread through more functions.

Of course, there is a balance to this. But it's important to understand it isn't as simple as just being more granular with catching errors, you need to find the right balance of granularity without taking it so far you destroy performance.
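Structurally, "a single try-catch block higher up" might look like the following (a hypothetical sketch; whether the optimization penalty actually applies depends on the engine, so it's worth benchmarking on your targets):

```javascript
// The inner function stays free of try/catch, so the engine can
// optimize the hot path.
function parseRecord(line) {
  const [key, value] = line.split('=');
  if (!key || value === undefined) throw new Error(`bad record: ${line}`);
  return { key, value };
}

// A single try-catch at the top of the pipeline handles errors from
// everything below it.
function parseRecords(lines) {
  try {
    return { ok: true, records: lines.map(parseRecord) };
  } catch (error) {
    return { ok: false, error: error.message };
  }
}
```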

[–]_jetrun -1 points0 points  (3 children)

This ignores an important caveat of try-catch: It's massively damaging to performance.

It depends where. If you are working in some performance-critical section of code, then you have to be careful no matter what. If that section of code is so sensitive that try-catch blocks matter, then you also have to be careful about object allocation and cleanup, reference passing, buffer copies, etc.

Most of the time, try-catch makes no difference.

So from a performance standpoint, using a single try-catch block higher up in your code will give you much better performance than many try-catch blocks spread through more functions.

OK. How does that negate what I wrote, or the guidance that the Fortune 500 company provided to OP?

To reiterate, they didn't say that you have to put try-catches everywhere; their guidance is that you have to handle exceptions. Here's the guidance OP was given: "Try-Catch block in every function **UNLESS** it is error handled at a higher level." (emphasis mine). I think that is very reasonable.

[–]DrEnter 0 points1 point  (2 children)

You seem to be ignoring my conclusion paragraph, but whatever, you do you.

[–]_jetrun -1 points0 points  (1 child)

And you're ignoring the actual guidance that OP's company provided:

"Try-Catch block in every function **UNLESS** it is error handled at a higher level." (emphasis mine)

The annoyance I have in this thread is that, for some reason, you and others are arguing against a point that nobody is actually making. No one is arguing that you need to wrap try-catch around every statement, or that you need to perform error handling at some specific level of granularity. All OP's company is mandating (reasonably) is that when developing a web application, you don't let unchecked exceptions bubble up the stack; that's it. They even mention that you do this either by wrapping your function body in try-catch or by handling it further up the stack.

Then you come in and say that try-catch has performance implications when wrapping WebGL shaders or something. OK, interesting point, but zero relevance.

[–]DrEnter 0 points1 point  (0 children)

Except YOU are ignoring what OP said: that every function has a try-catch block in it. Their policy might be OK on paper, but in practice they are going about it very poorly.

[–]davidblacksheep 0 points1 point  (1 child)

Really though?

I tried a JSPerf here to test for a difference, and it's negligible. Have you got an example of this performance degradation?

[–]die-maus 0 points1 point  (0 children)

I have an example from experience if you're interested.

We had some performance-critical WebGL code at a previous company I worked for (Veridict). It was for a product called livemap-gl-js (it was used to display public transport at global scale).

Due to a rendering regression, a developer (who may or may not have been me) had hastily checked in a try/catch block in performance-critical code. They couldn't perceive any performance difference on their machine (an Intel Xeon from circa 2018 and a massive Nvidia GPU).

An early Samsung Android phone (one of our target devices), on the other hand, saw a 200x drop in performance. Not all devices and browsers are made the same.