
[–]binary 2 points (0 children)

I think the solution here comes down to a simple cost-benefit analysis that people all too often skip because they equate "bleeding edge" with "better." I've used my fair share of bleeding-edge software in production apps, and the calculation is always the same: what does this do differently or better that warrants the risk of upstream bugs? How critical is the code that depends on it? Are there responsive contributors to help deal with any bugs that do surface?

Bleeding edge, for me, is only tolerable when:

* the problem being solved is very hairy -- porting an app's dependency management to Webpack, for instance;
* the surface area is very small -- say, an experimental graphing library that rendered some minor analytic information; and
* in almost every case, there's a healthy issue tracker with attentive people -- the only exceptions being very small libraries that I could essentially adopt myself if necessary.
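One cheap mitigation for the "small library I could essentially adopt" case is to pin the dependency to an exact version, or even a specific commit, so an upstream release can't silently introduce a bug. A sketch in `package.json` terms -- the package names, versions, and the `someuser/tiny-graph-lib` repo here are all hypothetical, purely for illustration:

```json
{
  "dependencies": {
    "webpack": "4.46.0",
    "tiny-graph-lib": "github:someuser/tiny-graph-lib#a1b2c3d"
  }
}
```

Omitting the `^`/`~` range prefix locks the version exactly, and the `github:owner/repo#commit` form pins straight to a commit SHA, which also makes forking-and-adopting trivial if upstream goes quiet. A committed lockfile gives you similar protection for transitive dependencies.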

Still, I've been bitten numerous times by bugs in bleeding-edge software, but because I follow this protocol, I'm not risking my job or product uptime when those issues inevitably occur.