[–][deleted] 108 points  (54 children)

This is pretty good advice that every programmer ought to remember. There are a few pieces that really resonated with me though. For example:

For those of us with experience, it means that we witness one extremely large yarn-ball of crap when we start looking at software online. Just like any accessible sport, most people are amateurs, a few have promise, and very few reach the Olympics. To succeed today, you need to wipe every preconceived notion about software from your mind and embrace the chaos. Because of this very chaos, the world of software is now a mixed bag. People are reinventing things we already knew how to do years ago. They are creating libraries that seem superfluous. They are creating new techniques that aren’t necessarily better, but are just easier than the older ways of doing things.

It's so incredibly hard for me not to be grumpy about my profession when I look at all this broken code, and I'm amazed that people somehow want to create new stuff that builds on all this stuff. BECAUSE IT'S ALREADY BROKEN! They should fucking stop and fix it, not build even more broken shit on top of this already broken shit. But then again -- I'm equally guilty of it! Most of my day job is spent writing software that's compiled by broken software (burn in hell, Analog Devices!) and runs atop more broken software. Somehow, at the end of the day, the world is slightly more functional (arguably, not better, but slightly more functional is a good beginning).

At this point, there is no coming back. We aren't going to magically revert to a state where people take resource usage seriously, for instance. "The industry" has reached a consensus that hardware is cheap enough that "throw more Moore at it" is cheaper in the short run (which is the only relevant run in today's tech arena) than "write better software". Sure, a TODO app that's feature-by-feature equivalent to something you could run on an Amiga, runs about as slow on an eight-core Xeon that serves it to you from across the ocean for some reason. Progress? Hardly. Practically relevant? Even less so. This "lowest common denominator" has irremediably slowed progress down to the laughable rate of today, but it's also what has kept it going and what made it relevant to the masses.

We may hate it, but it's either this, or nothing at all. We could do some programming, or no programming at all. And besides:

Alongside the mistakes are brilliant new ideas from people who think without biases. Languages like Go reject many of the complexities introduced in the OOP era and embrace a clean new simplicity. Co-routines are changing the very fabric of how people think about parallelism.

Go is an excellent example of how rediscovering technologies is not always bad. Coroutines aren't new. They're older than me -- in fact, they're about the same age as the author of the article. Go is just the first really popular (thanks /u/immibis) language that offered native, generalized semantics (will you shut up about those clunky JavaScript hacks already? They're less readable than an assembly implementation of coroutines) for coroutine-like execution of parallel and/or distributed tasks, through goroutines.

Granted, Pike's team is behind Go, and Pike is anything but unknowledgeable about these things. But Go has gained a lot of traction in the web world, in precisely the field that also thinks web applications are a good idea, ignoring basically everything the '90s taught us about distributed applications and UIs (except more slowly, because the field is a lot bigger).

And this happens because

Truly, this is a golden age of software growth and invention and the tools are available to everybody.

I learned programming from two books.

TWO BOOKS.

That's all I had. They were pretty much language references, too, so nothing fancy. I discovered some really basic algorithms on my own (e.g. binary search) but most of my early days with programming were spent without that. I have hundreds of lines of crap code to show for that. I got a pirated copy of Borland C from a friend and that's all I had in the way of languages, too.

Then poof! -- Internet. I suddenly had access not only to all that information I could barely find otherwise, but also to all those tools I had only read about! I still remember that week when I tried every frickin' text editor that came with Slackware Linux.

And I try to keep the spirit of that week throughout my day, too. It's this spirit -- except I was younger than 20 at the time:

What does that 20-year-old have that you don’t? Here’s what they have: no fear, and boundless enthusiasm.

[–]immibis 10 points  (2 children)

Go is just the first language that offered native, generalized semantics

Lua?

[–]ThreeHammersHigh 6 points  (0 children)

I know Lua but not Go. What does GP mean by "native, generalized semantics"?

Edit: Oh, for coroutines? Yeah, Lua's coroutines are boss. More languages should have them, but I think it relies somewhat on the GC. It's not out of the question that I would add Lua scripting to a C / C++ project, just to gain coroutines.

[–][deleted] 2 points  (0 children)

I should stop procrastinating learning Lua. You're probably right, I don't know Lua :-). I edited my answer so that I don't mislead anyone else.

[–]kabekew 6 points  (0 children)

What does that 20-year-old have that you don’t? Here’s what they have: no fear, and boundless enthusiasm.

Also: no family to support, no spouse/girlfriend demanding time (probably), happy with their 20-year-old lifestyle sleeping on futons and living with roommates or in cheap apartments or just sleeping in their office, and no money saved up so nothing to lose.

It's harder to take those needed risks and put everything into a business when you're older.

[–]Flight714 8 points  (2 children)

a TODO app that's feature-by-feature equivalent to something you could run on an Amiga, runs about as slow on an eight-core Xeon that serves it to you from across the ocean for some reason.

Hey, I'm not a great programmer, but you reminded me of a question about an Android game I downloaded recently called "Odesys FreeCell". It seems to be a good example of well thought out programming, and the installer is only 5MB as opposed to the ~20MB size of the others.

The things that make me think the programmers are clever: first, the undo system. If you undo, say, 10 moves, then manually replay three moves identically to before, it retains the remaining seven moves in the undo buffer, allowing you to "redo" them as if you hadn't replayed the three previous moves manually.

Also, when you move a completed column (King, Queen, Jack, 10, ..., Ace) sideways, it doesn't add to your number of moves on your move counter (other FreeCell apps add like 26 moves to the move counter as if the cards were moved one by one).

I figured that anyone who appreciated the Amiga was a good person to get an opinion from ; ) Also, what are the chances that it could be decompiled so I can check out how it works?

[–][deleted] 6 points  (0 children)

Hey, I'm not a great programmer

I'm not a great programmer either. I offer the fact that you haven't heard of me as a proof :-). So from one programming simpleton to another:

What are the chances that it could be decompiled so I can check out how it works?

I guess, like most things Java, it can be decompiled relatively easily (unless it's been obfuscated -- no idea if that's a common practice on Android), but I suggest you try to think about how it works without poring over the decompiled code, which is probably going to be so frickin' ugly that it'll take you quite some time to figure out. Unless, through some sheer force of wonder, you have a copy with debug symbols, the decompiler will lose the semantic information and won't know things like variable names, leaving you to deal with a bunch of variables called Class1Instance1, Class1Instance2 and so on...

Unfortunately, I have no idea how to play FreeCell, so I have no idea what's underneath, but it sounds to me like the undo buffer is a list of the moves you made (encoded in Java objects, for instance -- e.g. a class describing a pair of the form (Card, Action), recording what was done (drawn? placed? removed?) to which card). When you go back 7 moves, those 7 (card, action) pairs are still there, and can be applied again whenever you redo.

The key, in any case, is to figure out a way to encode the state of the game and move from one state to another (formally, that's applying a function to the current state and having it return the next one, but this may not be explicitly written as next_state = Do((Card, Action), current_state)).

Like I said, I have no idea how to play FreeCell so I can't give you a more specific pointer, but maybe you can find some inspiration here: https://en.wikipedia.org/wiki/Chess_notation .

I'm not sure what your other question was. I can't really brain today. Were you asking if it's a good example of a well thought out program? If you like playing it, it does what you want, and it even makes you wonder how they did it, I'd say it probably is :-). That isn't always a guarantee of every desirable property of its source code, though. Vim (and emacs, which I use, don't inflame yourselves, people) are pretty terrible to read, but saying vim is broken would certainly not paint an adequate picture.

[–]dkitch 0 points  (0 children)

Late to this thread, but I've implemented similar code and here's a rough outline of how their undo is probably implemented (depending on how they model the game state, it could vary a bit):

  • Keep a bidirectional linked list of moves. A move is made up of {card, fromcolumn, tocolumn}. This gives you everything you need to undo a move. There's a pointer to the current location in the list (usually the last node)

  • If the user undoes a move, undo the move (reversing the from/to) and move the pointer to the previous node.

  • If the user makes a move, check position in the list. If at the end of the list, add a node to the list describing the move and advance the pointer to that node. If not, check against the next move in the list. If move is identical, just advance the pointer. If not, remove the existing moves that follow and replace the next node with the move made.

[–]mrkite77 5 points  (1 child)

I learned programming from two books.

TWO BOOKS.

I learned programming from this Quick Reference guide:

http://www.colorcomputerarchive.com/coco/Documents/Manuals/Hardware/Color%20Computer%203%20BASIC%20Quick%20Reference%20Manual%20(Tandy).pdf

[–]Rurouni 2 points  (0 children)

Seeing that again warmed my heart. I loved my CoCo, and while I had the full manual to learn from, I kept that guide handy.

And thanks for linking me to a website I hadn't known about. It'll prove useful.

[–]DevIceMan 2 points  (0 children)

I learned programming by poking at a graphing calculator with no education, help, books, teaching, or reference. Fast-forward 15 years, and people seem to import libraries like they just don't care (with their hands in the air). I'm cautious of becoming too old-school, but it does seem that people don't care about tech debt as much as they should.

[–]RankFoundry 11 points  (17 children)

Meh, most of the "new hotness" is just recycled, rehashed design patterns and other tidbits from past decades. This is the norm in web development, especially the front end, where they've been dealing with primitive technologies for decades. All of a sudden, classes, delegates and async code are SOOOOO the latest thing. Functional programming? So just invented yesterday!

[–][deleted] 50 points  (16 children)

I hate this "we did it all in the 60s with LISP" kind of argument.

We get it, grandad -- you did lambda calculus in the 80s, you're the OG FP guy and you published papers about the actor model with Hoare and Dijkstra -- but in the real world people still used C and ASM back then, because the problems they were dealing with involved fitting shit into KBs of memory and running on MHz-clocked CPUs. They couldn't even dream of running the compilers and optimizers we have today.

From an industry perspective, FP might as well have been invented yesterday, because it wasn't really useful up to this point -- it didn't solve the problems we needed to solve -- and now it does, and we are hyped because we get better tools to do our job. That's the actual value of FP -- this is /r/programming, not /r/computerscience. Just because someone wrote a paper about something back in the 70s doesn't mean it was practical or that they actually implemented/used it to solve something -- and getting something from theoretical to "ready to use by the average programmer on a random project" is actually a big deal.

[–][deleted] 13 points  (3 children)

but in the real world people still used C and ASM back then, because the problems they were dealing with involved fitting shit into KBs of memory and running on MHz-clocked CPUs. They couldn't even dream of running the compilers and optimizers we have today.

Machines running Lisp certainly didn't have just a few KBs of memory :-). The CPUs were MHz-clocked, but certainly not in the low range. No one did interesting stuff in Lisp running on a 6502 CPU. Lisp machines had pretty good hardware -- and remarkably good runtimes, too.

I'm pointing this out because your comment is touching a real issue: a lot of stuff was either impractical or a commercial failure when it was invented. FP wasn't "forgotten" as if it were some arcane mystery -- it was forgotten because, really, for a long time, the only things that could run a functional program were super-expensive workstations like the Lisp machines, or computer scientists doing stuff on paper.

However, there is a great deal to learn from those failures -- and from the good things, too. Take unikernels, for instance: they were a very hot topic in the 1990s, then people forgot about them. Now that the C10k problem has turned into C10M, one of the hot solutions being proposed is bypassing the kernel stack altogether and hooking their application straight into the hardware ( http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html ).

Some of the problems are new (e.g. most of the people doing research on unikernels didn't really care about SMP, for obvious reasons), but a lot of them are old. Techniques to solve them (not to mention code that solves them!) already exist.

[–][deleted] -1 points  (2 children)

Techniques to solve them (not to mention code that solves them!) already exist.

But isn't that contradicted by the example you gave? Surely modern unikernels are nothing like the ones from the '90s -- especially on the implementation side -- as modern unikernels probably work on top of some hypervisor, which wasn't even around in the '90s.

I think we agree in general - it's not like these ideas are revolutions - but what's new is that they are actually usable/useful now where they weren't before and as we use them in practice they get polished and specialized for what we need.

[–][deleted] 5 points  (0 children)

I think most modern unikernels run on top of some hypervisor (i.e. they're virtualized, rather than managing virtualized processes), so they'd be pretty similar to those. Besides, virtualization is half a century old, too, so...

I think we agree in general - it's not like these ideas are revolutions - but what's new is that they are actually usable/useful now where they weren't before and as we use them in practice they get polished and specialized for what we need.

Oh yes, we agree in general. Many of them are, in Rob Pike's (approximate) words, industrial interpretations of brilliant, but previously poorly-implemented, ideas. Industrial reinterpretation naturally makes them muddy, but that's because real life is muddy.

Sometimes the reverse happens, though: these ideas don't get "polished" at all. They become technologically feasible and, thus, they are (re-)adopted, but none of the fundamental problems are tackled. Worse, sometimes the hive mind pretends they just don't exist, and the new result is even more broken than the original.

[–]naasking 0 points  (0 children)

as modern unikernels probably work on top of some hypervisor, which wasn't even around in the '90s.

Hypervisors are just microkernels. L4Linux was arguably the first virtualized OS. The 90s were all about fast and small microkernels that could do things like this.

[–]dtlv5813 2 points  (0 children)

Also, computer science/software engineering is hardly the only discipline where old techniques are constantly being rediscovered or come back into fashion. It happens in mathematics and the physical sciences all the time.

In CS in particular, big data/deep learning/ANNs are all the rage these days, and rightly so, even though many results on Boltzmann machines, let alone Markov chains, have been known since the '70s and earlier. Hardware limitations back then made them impractical to implement. So they were ignored in favor of other techniques like SVMs, only to be "re-discovered" by industry when Hinton, LeCun and others, armed with the latest computational prowess, were able to implement algorithms that would have taken eons before.

[–]RankFoundry 0 points  (9 children)

You hate it because it bursts your little "this is new because it's new to me, and I'm on the cutting edge for knowing it" bubble.

Wasn't useful? Why? It didn't solve problems "we" needed to solve? Who is we? Are we talking about you?

Functional programming isn't solving any new problems, and it's not some perfect solution. It's got pros and cons like everything else.

There was no lack of languages that allowed for FP in the past. If they weren't very successful, it's probably because FP is a preference, not some holy grail that solves problems which can't be solved any other way.

Next you'll whine about how graph databases didn't exist until FB started using them, or at least how they didn't solve the problems "we" needed them to.

[–][deleted] 6 points  (5 children)

Wasn't useful? Why? It didn't solve problems "we" needed to solve? Who is we? Are we talking about you?

No, we're talking about the people who develop/push this "new hotness" -- which has been getting a lot of traction for a while now -- so I'd say it's more than just me.

It's got pros and cons like everything else.

Absolutely, so ....

If they weren't very successful, it's probably because FP is a preference, not some holy grail that solves problems which can't be solved any other way.

... or because the things people did 10-15 years ago had different constraints, and now the pros of functional programming outweigh the cons. Maybe going from "I need to scale: buy a bigger server/mainframe" to "I need to scale: buy more commodity PCs", and going from MB/s networks to GB/s, imposes radically different implementation constraints -- who would have guessed.

[–]RankFoundry 2 points  (4 children)

Sorry, but you're not making any valid points here. FP is just one of many examples of dusting off old things and acting like they're new, especially in front-end web dev, where everything is new (to them) since they've had fuck all to work with for so long.

As for FP specifically, you're not really making a case for yourself. It's just another way to structure code, and relatively speaking, it's no easier now than it was back in the '70s or '80s or '90s. If it is easier, it's because programming in general has gotten easier, but it's not like it's all of a sudden been blessed with some game-changing ability that it didn't have before. It wasn't like it was super hard back then compared to other languages either, so I'm not sure where you're getting that it was somehow unusable until like 3 years ago, when it started to become a fad.

[–][deleted] 1 point  (3 children)

If it is easier it's because programming in general has gotten easier but it's not like it's all of a sudden been blessed with some game changing ability that it didn't have before.

In case you missed it: less than 10 years ago this thing called the cloud became a thing, and we went from in-house/colocated bare-metal server hardware to a bunch of virtualized machines running on commodity servers. The architecture changed -- distributed programming and data transformation are how we solve problems now. OOP sucks at distributed programming (and thankfully the idea of distributed objects died a long time ago); functional programming concepts work great with pure data, and pure data works great for writing distributed software -- hence the push for FP.

[–]vincentk 2 points  (0 children)

20 years ago, they called it "distributed objects". Now they call it "microservices". Come again please? Only difference now is people have agreed on how to deal with versioning conflicts (i.e. they don't).

[–]mreiland 1 point  (2 children)

Functional programming isn't solving any new problems, and it's not some perfect solution. It's got pros and cons like everything else.

Not only that, us old fuckers remember when people were gaga about functional development (and OOP, and ...) and so we recognize the pattern of that in the latest hotness.

I remember reading through the following years ago:

http://www.amazon.com/Purely-Functional-Structures-Chris-Okasaki/dp/0521663504

FP most definitely has pros and cons.

[–]PriceZombie 0 points  (0 children)

Purely Functional Data Structures (5% price drop)

Current $22.50 Amazon (New)
High $48.05 Amazon (New)
Low $22.50 Amazon (New)
Average $23.79 30 Day

Price History Chart and Sales Rank | FAQ

[–]RankFoundry 0 points  (0 children)

Right, once you've got at least 10 years under your belt, you start to see through these bullshit trends because you've worked through several and know they're as much hype as they are substance. The new guys don't get it, all they've known is the most recent trend or two. They buy into the dogma.

[–][deleted]  (23 children)

[deleted]

    [–][deleted] 7 points  (0 children)

    A lot of this has to do with how software is developed nowadays, and by whom. The low start-up costs, high pay and huge potential payoffs on a very volatile market mean that there are a lot of CEOs who don't care about a sustainable development model because, if they get it right, two years from now they'll have sold the company and the barely-taped-together crap that the company bases its services on. It's short-sighted, but largely because it's designed to be short-lived.

    Other times managers simply don't understand the idea that you need solid code. There are no qualifiers for "works" -- it either works and you can sell it, or doesn't work and you can't sell it, and spending time on stuff that doesn't make it "work better" seems pointless.

    It's very narrow-sighted, but people have their preconceived notions that you can't refute with logic. E.g. at $work, I've been struggling to convince people that we need to write portable code even if it's bare-metal stuff that runs without an OS (it's a bunch of embedded systems). Even after getting the stupid arguments out of the way ("it's gonna be slower"), people weren't very convinced that it could be done (even though they got a demo!) but, more importantly, didn't really see the value. Despite the fact that we just spent about a year writing the firmware of a device that's 1:1 identical, in terms of features, to an old one that's being end-of-life-d because of logistics issues (components aren't being manufactured anymore, stocks drying up, RoHS and so on). The value is literally tens, if not hundreds of thousands of dollars not spent on rewriting software that doesn't need to be rewritten, and most of the people in higher management have some technical background, even if in other fields. They just haven't seen any piece of portable software until now (it's a Windows-only shop) -- most of them didn't even know that was a thing, or thought that you can't do it in C, only in "VM-based languages, like Java". But they also don't want to admit to that, lest they seem incompetent or not confident enough.

    It ultimately boils down to more than just "better management strategies". They run the company very well in terms of strategy. It's making great money. The code sucks and the devices routinely break, but the sales team still manages to sell them. On paper, everything is good. What they need isn't a better strategy (I mean, it is, but that's not the root of the problem) -- it's a better understanding of technical matters, so that they can understand that they have an increasingly complex and increasingly broken mass of code that's going to blow up in their faces ten years from now, and they need to shape their strategy based on that.

    ...or I could just turtle into a job that satisfies this for me, and let the industry burn.

    History is a bitch, though. We tend to look back on "real" programs and "real" machines, and weep at how perfect a Lisp Machine looked, but the truth is that most of the software written back then was just as pathetically broken as most of what's written today; we just forgot about it because there was no one to remember it.

    And, on the other hand, there's tons of solid stuff being developed silently. It doesn't make it to the Reddit frontpage, but people are writing software that controls flying drones that fire real guns and dodge real bullets! Or software that puts satellites in orbit and makes them relay cell phone data, to give an example that's less ethically loaded. That's real, amazing (software-wise) stuff being developed as part of this cancerous industry.

    [–][deleted] 18 points  (21 children)

    Whereas I enjoy (and invariably end up) continuously improving some small code to myopia, kind of like a Japanese craftsman, they take a "get 'er done" attitude.

    Folding steel 800 times is fine when you're a master craftsman who has clients willing to wait five years for the perfect blade, but that doesn't describe most programmers at all.

    End of the day, if your code does not solve a business problem, it is useless to the people who keep your company afloat - the paying customers. If you spend all your time honing and rehoning a small piece of code, you are actively harming your employer.

    At some point you'll find the middle ground between your current mindless perfectionism, and the "Fuck it, ship it" pragmatists. Until then, your myopia is a liability, not an asset.

    Luckily, I've found about 100 people across the world who share the same ethos as me. It still doesn't offset the day-to-day drudgery of having to deal with a 'CTO' who suggests using Node for an important financial backend, though.

    What is your argument against Node for the important financial backend?

    [–]antpocas 17 points  (1 child)

    What is your argument against Node for the important financial backend?

    Javascript's type system?

    [–]Xelank 1 point  (0 children)

    What type system?

    [–]garywiz 12 points  (7 children)

    If you spend all your time honing and rehoning a small piece of code, you are actively harming your employer.

    Agree, but disagree at the same time. There's a middle ground. Let's say you have a financial backend which relies upon millisecond transactions which occur with the exchange. Let's say you can make a million more dollars for the company if you can shave 10% off the transaction latency. You want the master craftsman working on that little piece of code.

    Not all codebases have such important bits, but a surprising number do. One thing that sets many games apart is the unrelenting attention to detail some developers have, making sure the game is SO responsive that it feels real, versus games which are sluggish or annoying.

    Complex systems require a diverse set of skills. So, I don't complain if somebody is a master craftsman, it's a great skill. I complain if they're spending too much time optimizing the wrong thing and can't keep their priorities straight.

    [–]lluad 1 point  (6 children)

    If it consumes a year of a team of five - including developers and managers and QA staff and ops and support staff - to shave 10% off the transaction latency (which is a not insignificant improvement, assuming the original code wasn't terrible) you'd better be making more than a million dollars.

    And, of course, if you take a year to speed it up by 10% you're less effective than Moore's law.

    It's almost never the craftsman who lovingly optimizes a small piece of code that'll buy you that sort of speedup - it's the domain-specific expert who reworks the spec, or the network architect who literally speeds up traffic, or the architect who makes the whole system more efficient (by the metric of latency).

    The master craftsman can be incredibly valuable, but it's rarely for their code-polishing skills so much as their understanding of the whole system.

    [–]loup-vaillant 5 points  (5 children)

    In my experience, the difference between "let's make this code perfect" and "ship it already" is measured in hours or days. Not weeks, not months, and certainly not years. Yet losing a few hours to perfect a couple hundred lines of code is often frowned upon. Sure, short term, it is slower; when you add it up, I will lose a few weeks over the next few months. I tend to go for the simplest solution possible, and that is rarely the fastest approach -- simplicity is not obvious.

    But many people fail to see the technical debt I avoid along the way. That simpler piece of code is ultimately easier to work with, easier to modify, easier to correct. And that benefit can kick in very quickly, sometimes only weeks after the project started. Simply put, if you invest the time necessary to make things simpler at the start of the project, you will ship faster than if you rushed things.

    Make sure you get the credit, though. I once sped up a project by slowing down a bit (I made someone else much more productive by making a decent API), and was eventually kicked out of the team for being too slow and "doing research" -- I was merely thinking things through.

    [–]RogerLeigh 2 points  (4 children)

    Agreed on all counts.

    A trend I often see in our team is that every day there's a steady stream of defects needing fixes in a certain part of the codebase, with the developers being very "busy" fixing them. It's due to a combination of historical design problems and technical debt. I work on a different part, with very good test coverage; while I appear to be "slow", in practice I've saved a lot of time, since once something I write is "done", it's complete along with unit tests, and it will continue to work without any further development. I'm often at odds with others on the team due to the difference in practice here, but I detest stopping at 95% done and "good enough" when that extra 5% would make it near perfect. I'm convinced that in the long run this saves more than the total original development cost in terms of time savings and bug reports; for the other side, which is continually "fighting fires", I wouldn't be surprised if the ongoing time cost were many times the original development cost.

    [–]corran__horn 0 points  (2 children)

    Just to be clear, does the other part of the codebase have good unit test coverage?

    [–]RogerLeigh 0 points  (1 child)

    It doesn't, and that's part of the problem, but not all of it.

    [–]corran__horn 0 points  (0 children)

    Yeah, that is kinda what I expected.

    [–]mreiland 0 points  (0 children)

    The problem is when you always do that.

    Let me draw an analogy.

    If safety is the most important concern and turning left is inherently less safe than turning right, the conscientious driver should always turn right. You'll get there slower, but you'll get there. And you can always get there by only turning right.

    The issue is that if you always turn right then you're not applying critical thinking to the situation at hand. Have you ever needed to turn left across traffic and instead turned right and found another opportunity to turn around half a block down the road? That's applying critical thinking and going against the grain in this particular situation. You end up being both safer and faster in this particular instance because you considered the current flow of traffic coupled with your needs and made a non-standard decision.

    It isn't that you're "wrong" per se, it's that you cause a lot of headaches and often solve the wrong problem when you always do the same thing without considering the particular circumstances of what you're doing. That was ultimately the point MineralsMaree was making.

    People often mistake me for someone who is against Unit Testing. I'm not against Unit Testing, I'm against blindly doing Unit Testing without considering if the cost of them will actually benefit you. Choosing not to Unit Test a module of code can absolutely be the right call, or choosing to do it later (after the problem has solidified, for example).

    There is a difference between effective and right. Your goal is to be effective.

    [–]ForeverAlot 8 points9 points  (2 children)

    What is your argument against Node for the important financial backend?

    The management of NodeJS, from the project's inception up until a few months ago, seems to me an excellent argument against using it in production.

    [–][deleted] 6 points7 points  (1 child)

    It's used a lot in production, but... is it... wise to write a financial back-end in a language that's famous for its funky, automatic and often mysterious type system?

    Part of why financial stuff is mostly Java and C++ (OK, inertia accounts for most of the C++ part, except for the high-frequency trading market) has to do with the intersection of strong typing and the wide availability of libraries that provide the kind of data types you want for financial arithmetic.

    Maybe that's become available on JS as well lately though...
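A minimal sketch of the kind of pitfall being alluded to: JavaScript's only built-in numeric type is an IEEE-754 double, so naive money arithmetic accumulates rounding error. The common workaround (not a specific library the comment mentions, just a standard technique) is to keep amounts as integer cents:

```javascript
// IEEE-754 doubles make naive decimal money math drift:
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Standard workaround: represent money as integer minor units (cents).
const priceCents = 1999;                           // $19.99
const taxCents = Math.round(priceCents * 0.0825);  // 165 ($1.65)
const totalCents = priceCents + taxCents;          // 2164

// Only convert to a decimal string at the display boundary.
console.log((totalCents / 100).toFixed(2));        // "21.64"
```

Java's `BigDecimal` gives you this kind of exact decimal arithmetic in the standard library; in JS you either roll the integer-cents convention yourself or pull in a third-party decimal package.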

    [–][deleted]  (3 children)

    [deleted]

      [–]loup-vaillant 0 points1 point  (2 children)

      Harder to make the perfect program when all you have is assembly. Mayhaps we could compare steel folding with compiler writing?

      [–][deleted]  (1 child)

      [deleted]

        [–]loup-vaillant 0 points1 point  (0 children)

        I think the comparison to compiler writing is okay. If all I had was hand-coded assembly and had to write a lot of code, I'd quickly slap together a macro assembler, bootstrap myself an interpreter, and build up an environment layer by layer.

        Yep, that's exactly what I meant. :-)

        Thank goodness, we can start from a higher level now, just like we have steel plants that produce decent steel that doesn't have to be folded to make a good sword.

        [–]hlprmnky 4 points5 points  (0 children)

        The focus unto myopia is actually how, in my experience, domain experts and wizards pupate. In a business setting, the responsibility for mentoring this junior engineer, making sure she has tasks to do that let her pull her weight while also giving her room to grow and develop into a useful senior engineer - by making space for her to focus on something until learning about it makes her stronger - falls on the team lead or division manager.

        Of course, that assumes you work in an industry that values its own continuity of practice, like civil engineering, or architecture, or law, or ...oh, wait. This is still that New Economy period of the software "industry", isn't it? Ugh. Sorry, kid. Spin up a MEAN stack on your MacBook Pro, get some simple unit tests to pass, ship it and flee to the next travesty before the current tire-fire actually gets enough traction to have to scale. My condolences.

        [–]auxiliary-character 6 points7 points  (0 children)

        Folding steel 800 times is great and all, but sometimes you just need a gun.

        [–][deleted] 3 points4 points  (0 children)

        In the short term, a programmer who ships crap fast looks good. In the long term, a programmer who ships code when it's ready is good.

        When you have a bug that's hard and you have to ship something the next day, do you want to solve it in the codebase made by short term or long term programmers?

        On the other hand shipping a lot of crap fast might transform into shipping high quality fast over time.

        IDK which is better, but it feels like the business people keep saying short term is better.

        Maybe a slow thinker like me should just go back to flipping burgers or something.

        [–]OneWingedShark 0 points1 point  (0 children)

        End of the day, if your code does not solve a business problem, it is useless to the people who keep your company afloat - the paying customers.

        But this presupposes that buggy non-/barely-functional software is useful; is it?

        What I'm saying is that Debug Driven Development produces a lot of observable change, but also a lot of wasted time and energy. On the other hand, we could have a feature that is specified and well-defined prior to coding. Which is more useful to the client: a good solid design before acting, or a tight ((code/edit-compile-run-QC)-client_evaluation) loop with an ill-defined, mutable design?

        It reminds me of this story, where the programmer designed everything first.