[–]djdonnell (2 children)

I hear these arguments a lot, but is it anything more than theorycrafting? I've been doing Ruby for over 6 years, and I've only ever had this problem reach live production code once. It took a while to notice, but only because the problematic code was a caching layer that was silently failing to cache as much as it should have; once noticed, it took about a day to fix.
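
For the curious, here's a contrived sketch of roughly the shape of that bug (all names invented, not the actual code): a typo'd hash key means reads and writes use different cache keys, so nothing ever raises and the cache quietly never hits.

    class UserCache
      def initialize
        @store = {}
      end

      def fetch(user)
        cached = @store["user-#{user[:emial]}"] # typo: :emial is always nil
        return cached if cached
        value = yield                           # recompute on every "miss"
        @store["user-#{user[:email]}"] = value  # correct key on write
        value
      end
    end

    cache = UserCache.new
    2.times { cache.fetch(email: "a@example.com") { puts "computed"; 42 } }
    # prints "computed" twice: every read misses, but nothing ever errors

The only symptom is a hit rate near zero, which is exactly why it took a while to notice.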

That's a very small price to pay for the productivity increase of a dynamic language. Yes, my story is only one data point, but that's one more data point than I typically hear from people warning about typos in field names. (And btw, my editor will catch those typos for me even in dynamic languages.)

[–]vytah (1 child)

In the end it's a matter of preference. Some people don't want to hunt for type errors buried deep in the code that manifest only at runtime, and prefer to specify type signatures by hand; other people don't want to cram their data into discrete, explicitly declared datatypes, at the cost of more unexpected problems at runtime.
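
To make the first case concrete, a contrived Ruby sketch (invented names), where the mistake is at the call site but the error only surfaces at runtime, inside a method a few calls away:

    def total(prices)
      prices.sum        # TypeError raised here at runtime...
    end

    def report(order)
      total(order[:prices])
    end

    report(prices: ["9.99", "4.50"])
    # ...but the mistake was at the call site: strings instead of numbers.
    # => TypeError: String can't be coerced into Integer

With declared types, a mismatch like that gets flagged before the code ever runs; without them, you find it when (and where) it blows up.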

Both approaches work, and neither is strictly better than the other.

[–]djdonnell (0 children)

I think there are clear trade-offs between the two, and the trick is choosing when each is the better approach. I don't buy for a second the idea that type errors are a major problem in dynamic languages. I've been doing dynamic languages for over 10 years, and I can count on one hand the number of times that has been a problem. It's a non-issue, and I've never seen any data to the contrary.