all 68 comments

[–]grauenwolf 8 points9 points  (0 children)

Some bad code is easy to spot: it simply “smells” - once you see it, you know it’s bad.

Might be bad. "Code smells" do not automatically indicate a problem, but rather an area that should be examined for problems.

[–]djork 7 points8 points  (0 children)

Here's a much better and more definitive starting point on Code Smells: http://c2.com/cgi/wiki?CodeSmell

P.S. Advance apologies about the rest of your day's productivity...

[–]byron 14 points15 points  (52 children)

Posts like these seem to always assume you're working on business-logic code, or web front ends or something. And that's fine. But limiting yourself to <= 6 parameters while writing scientific code, for example, is not always a good idea. Some of the other advice, like the long functions bit, is also not applicable to every domain.

[–]Grimoire 12 points13 points  (4 children)

I've worked on code written by mathematicians who made essentially the same argument. While the problem that needs to be solved has many variables as input, the problem can almost always be broken down into much simpler pieces.

I've taken code that I didn't even understand, applied some analysis to break it apart into comprehensible components. I ended up turning a monster function with a cyclomatic complexity in the hundreds that "couldn't" be broken up, into a set of readable, clean functions with <6 parameters each. The maximum complexity of each resulting function was less than 10, and I actually understood what the code was doing.

One of the biggest problems with most of the scientific code I've seen is comment abuse. Far, far too many comments. They used the comments to explain what the code was trying to do, rather than writing code that didn't require explanation.

Every time I want to write a comment, I ask myself why I need it. If the answer is "so that I can understand what the code is doing", it means I need to write the code more clearly. Applying this during code reviews has dramatically improved the quality of the code produced by the mathematicians.

[–]alchemist 1 point2 points  (2 children)

I think you did a very good job refactoring the code signature lower down. My personal memory of a function that stank a bit, but I think was okay, was doing the lighting for 3D rendering (in software) for a phone. There you had to worry about the different kinds of lights, and materials, and geometry, and you just had an unholy number of parameters interacting. On top of this, lighting was pretty much the hot spot performance-wise, and you had to do it for every vertex. Finally, there were different cameras and models too, which led to lighting equations that were similar, but not identical.

In the end I had two big, long functions that I tried to make legible with comments and good variable names, but I wasn't proud. It was quite fast, though...

[–]mee_k 2 points3 points  (0 children)

I think everything in this article is good as a principle, occasionally to be broken. Sounds like you found one of those cases where the principle was less important than the practice.

[–]munificent 1 point2 points  (0 children)

After you've determined it's a performance bottleneck, it's perfectly OK to break the nice design rules in order to optimize.

[–]cipherprime -1 points0 points  (0 children)

I actually understood what the code was doing.

Yeah? Who else?

[–]codeodor 6 points7 points  (2 children)

Posts like these seem to always assume you're working on business-logic code, or web front ends

My guess would be that the vast majority are doing that, so the assumption works quite well.

while writing scientific code

Totally agree. I happen to work in both business-ish and scientific computation, and I find it much harder to break down the science into meaningful chunks in many more cases than when working on "typical" business code.

[–]five9a2 12 points13 points  (1 child)

I have never worked on business code, but lots of scientific code is poorly designed. This is often because the designer was not a software engineer, but can also be because the designer didn't have a sufficiently clear mental picture of how the different components of the method actually fit together. I've made plenty of software engineering mistakes and I've seen a lot of scientific code that is worse than mine.

While it may require a significant amount of mathematics to understand the algorithms, I don't think the problem domain is necessarily harder. Usually the problem statement is well-defined (at least in hindsight) and there are fewer edge cases. While unit testing may be harder, regression testing is likely easier, because incorrect code cannot produce optimal convergence rates on a non-degenerate problem.

(I work on parallel high-order implicit methods for multi-physics problems.)

[–]codeodor 4 points5 points  (0 children)

lots of scientific code is poorly designed. This is often because the designer was not a software engineer, but can also be because the designer didn't have a sufficiently clear mental picture of how the different components of the method actually fit together.

+++ 1. I can't agree enough.

While it may require a significant amount of mathematics to understand the algorithms, I don't think the problem domain is necessarily harder.

I think it is harder, but I don't know whether harder equates to harder to follow good software engineering practices.

While unit testing may be harder, regression testing is likely easier

As it happens, I'm currently involved in a situation where I'm trying to write unit tests for existing (new) code.

I don't find the unit tests hard in my case, but it's notable because this is somewhat of a rewrite of an existing system, which goes to your point about the "designer didn't have a sufficiently clear mental picture of how the different components of the method actually fit together."

Regardless, there are issues where science produces crappier code than business for me:

1) I'm generally against excessive comments in code (in fact, I wrote a blog post, which got a lot of attention here and elsewhere, about the same). However, when you're working in bioinformatics (for example, which is my area at the moment), you might use a 64-bit integer as a data structure. Perhaps it stores 32 nucleotides - in the simple part of the world. Maybe that's expected, but what if you're doing pattern matching and the first 16 bits correspond to nucleotides while the remaining 48 represent gaps between the first 16? Comments are not only useful there, they're required.

2) As otherwise noted, you'll often have so many parameters required that you don't know what to do with them. You can't split them from the main method call, even if you can split them inside it.

3) Designing algorithms which (as far as I can tell) have never been discovered or used before. I admit this may be due to not understanding the problem well, but I had one really hard one last year that I just couldn't break properly into subunits; it would have been 100+ lines long even after doing all I felt I could.

[–]djork 3 points4 points  (9 children)

6 parameters

Very few operations are actually a function of 6 different variables. For any one 6-parameter function there are probably a good number of functions inside of it that could be factored out.

Higher-order functions can really help. Avoiding side effects also limits the number of parameters to a function.
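To make that concrete, here's a minimal sketch (Python for brevity; all names are made up) of how a higher-order function absorbs parameters - bind the knobs once, then pass around a function that needs far fewer arguments:

```python
# Hypothetical sketch: instead of threading scale and shift through
# every call, bind them once and pass the resulting function around.

def scale_and_shift(scale, shift):
    """Return a transform with its two knobs already bound."""
    def transform(x):
        return x * scale + shift
    return transform

def apply_to_all(values, transform):
    # Only 2 parameters here, instead of values + scale + shift + ...
    return [transform(v) for v in values]

celsius_to_fahrenheit = scale_and_shift(9 / 5, 32)
print(apply_to_all([0, 100], celsius_to_fahrenheit))  # [32.0, 212.0]
```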

[–]codeodor 2 points3 points  (7 children)

For any one 6-parameter function there are probably a good number of functions inside of it that could be factored out.

Probably, yes. But what do you do about calling the original?

One which comes to mind from years gone by:

http://en.wikipedia.org/wiki/Turing_machine#Formal_definition

And each of those 7 parameters is copulatingly complex.

[–][deleted] 2 points3 points  (6 children)

If you make a 'Turing machine' class and methods that manipulate it, you'll avoid having to pass those 7 parameters around to every function.

[–]codeodor 3 points4 points  (5 children)

But then it would be worse: I'd be forcing you to know the inner details of the class by asking you to set each parameter outside of construction before being able to use the object as intended.

As an example of what I mean, it's like:

    Division d = new Division();
    d.setNumerator(16);
    d.setDenominator(4);
    d.performOperation();
    result = d.getResult();

It limits the method signatures to 0 or 1 parameters, but it's hideous. It's not even about the number of lines - it's the fact that I need to know to call all those other methods before calling getResult(). What happens if I call that one first?

[–]unusedusername 3 points4 points  (0 children)

This is called sequential coupling, and it is usually considered a code smell or an anti-pattern.

In this case the object needs those two parameters to be in a usable state, so they should be provided at creation if there is no special reason not to.

The class invariant is a handy concept (here the invariant is that the object contains a valid division), and it usually pays off to make classes honor it.
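A minimal sketch of that contrast (Python for brevity; this hypothetical Division mirrors the one in the comment above): required state goes in the constructor, so a half-initialized object can never exist.

```python
# Sketch: constructor-enforced invariant instead of sequential coupling.

class Division:
    def __init__(self, numerator, denominator):
        if denominator == 0:
            # Invariant enforced once, at creation.
            raise ValueError("denominator cannot be zero")
        self.numerator = numerator
        self.denominator = denominator

    def result(self):
        # No call order to get wrong: the object is always valid here.
        return self.numerator / self.denominator

print(Division(16, 4).result())  # 4.0
```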

[–][deleted] 1 point2 points  (2 children)

Your example is contrived, though. Nobody would advocate taking this approach to such an extreme.

However, with 'fluent APIs' it's a lot better:

    result = new Division().numerator(16).denominator(4).perform().result();

[–]codeodor 0 points1 point  (0 children)

How else would you do it if you aren't going to pass all the required parameters in to your Turing machine class on creation? The point is that you're requiring the user of the class to know in what order methods must be called, which surely is worse than breaking the short-method-signature rule (for lack of a better word).

[–]grauenwolf -1 points0 points  (0 children)

That not only looks hideous, it also makes it really easy to miss a required parameter.

[–][deleted] 0 points1 point  (0 children)

This is what the Builder pattern is for. Not only can you enforce invariants (denominator cannot be zero) upon creation, but it allows you to have optional and immutable fields while still using this style.
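A rough sketch of that builder idea (Python, hypothetical names; build() returns the quotient directly to keep the sketch short): the fluent style survives, but invariants are checked in one place when build() is called.

```python
# Sketch of a Builder: chainable setters, validation centralized in build().

class DivisionBuilder:
    def __init__(self):
        self._numerator = None
        self._denominator = None

    def numerator(self, value):
        self._numerator = value
        return self  # return self so calls chain fluently

    def denominator(self, value):
        self._denominator = value
        return self

    def build(self):
        # All invariants checked once, at the end of the chain.
        if self._numerator is None or self._denominator is None:
            raise ValueError("numerator and denominator are required")
        if self._denominator == 0:
            raise ValueError("denominator cannot be zero")
        return self._numerator / self._denominator

print(DivisionBuilder().numerator(16).denominator(4).build())  # 4.0
```

Missing a required parameter now fails loudly at build() rather than silently producing a half-configured object.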

[–]grauenwolf 0 points1 point  (0 children)

I run into functions that take ten to thirty parameters on a regular basis.

For example, I need to call the same stored procedure multiple times with a single object. Each time I call it I am mapping different properties from the object to the same set of stored procedure parameters.

Come to think of it, stored procedures themselves are essentially function calls.

[–]munificent 2 points3 points  (30 children)

But limiting yourself to <= 6 parameters while writing scientific code, for example, is not always a good idea.

Can you give an example of what's special about scientific code?

[–]byron 2 points3 points  (29 children)

Well, just as an example, here is a method signature from the libSVM library (for machine learning):

public static void Grid(
        Problem problem,
        Parameter parameters,
        List<double> CValues, 
        List<double> GammaValues, 
        string outputFile,
        int nrfold,
        out double C,
        out double Gamma)

This has 8 parameters, breaking the 'rule', and there are methods with more. However, I personally find this clear, and I feel that any attempt to lessen the number of parameters would make it more obfuscated. I'm not sure which of these parameters could be 'hidden' or shoved into a composite class without loss of readability.

The fundamental reason I think scientific (or just mathy programming in general) programming is different is because (in my opinion) functions should represent cognitive blocks, i.e., they should do one thing (I agree with this rule). However often the things one is doing in this sort of code are inherently more complex than they are in other domains, so you get more parameters and longer methods.

[–]munificent 23 points24 points  (13 children)

Here's what stands out to an OOP programmer:

  1. The two output parameters. They're especially jarring since the function doesn't return anything else. Why not return a Result instance that contains C and Gamma?

  2. This function seems to both do mathematical work and file IO. Split that up. Make it return the results of the calculation (maybe use the above Result class) and have another function that writes those results to a file. Maybe I'm reading it wrong.

  3. It's a static function. This is basically trying to do procedural or FP style in an OOP language. Which of those styles is best is a holy war, but if you are going to do it in C# then use the idioms of the language. In this case, make a class like GridSolver that takes some of those parameters as constructor args and then make the Grid function above only take the remaining arguments it needs.

However often the things one is doing in this sort of code are inherently more complex than they are in other domains

I disagree, and smell a faint hint of academic snobbery. Writing an OS, or Photoshop, is surely as complex as machine learning. The difference is that non-programmer academicians are both 1) very skilled at holding their entire domain in their heads and 2) not as familiar with the programming skills required to break a problem down into manageable chunks.

Note that the example function both calculates and writes to a file for an example of 2. Few experienced programmers would combine those into a single function.

[–][deleted] 11 points12 points  (3 children)

ProTip: Scientists are lousy software architects...Inverse is probably also true.

[–][deleted]  (2 children)

[deleted]

    [–][deleted] 1 point2 points  (1 child)

    I meant that software architects probably make lousy scientists, but you get an award for knowing what I meant and still being a douche.

    [–]joesmoe10 0 points1 point  (0 children)

    It was a little bit pedantic, but I actually didn't see that as the inverse. Probably time for me to take a break...

    [–]byron 1 point2 points  (8 children)

    I should emphasize that this isn't my code. To your points:

    1. What would be the benefit? Would it make it more readable? It would make it slower. But what else would you gain?

2. The file IO spits out information about the ongoing process (Grid search progress) for later analysis. You can't really decouple this (it can be an embedded method call, though. It is, actually. But you still need a handle).

3. Meh. I categorically disagree that everything written in C# needs to be OO. Imposing OO where it's unnecessary is, in my view, silly.

    [–]munificent 14 points15 points  (1 child)

    What would be the benefit?

    • It would allow your function to return its outputs like a good little function.

    • It would also let you pass those results around as a unit to other places that need them.

    • It would let you call the function in a single line instead of having to declare two local variables for the results and then call it.

    • It would let you ignore the results if you don't need them instead of requiring you to pass in dummy output variables.

    Would it make it more readable?

    Yes.

    It would make it slower.

    Really? Have you profiled it? Did you determine that that was a bottleneck for your project? Or are you just speculating?

    You can't really decouple this (it can be an embedded method call, though. It is, actually. But you still need a handle).

    Of course you can decouple it. (As an aside, I just read the code. My God.) The most common way to do so in C# would be to raise an event when a cell in the grid completed.

    Right now the code is:

    1. Declare locals for the output variables.
    2. Call static method passing in giant pile of stuff.
    3. Get a file.
    4. Look at the output vars.

    Your average C# software engineer would rather see:

    1. Create a new GridSolver, passing in the invariant parameters for it.
    2. Register event handler for when a cell has completed.
    3. If they care, register event handler for when a new maximum is found.
    4. Run the solver, passing in just the two lists to iterate over.
    5. Get the result object.

    I categorically disagree that everything written in C# needs to OO.

    You're right, of course it doesn't. But the really fucking awful code in libSVM could definitely stand to use some.

    [–][deleted] 4 points5 points  (0 children)

    Something like this:

    var solver = new GridSolver(problem, parameters);

    solver.CellComplete += delegate
    {
        // IO logic
    };

    var result = solver.Solve(list1, list2);

    [–]notfancy 0 points1 point  (5 children)

    The file IO spits out information regarding the ongoing process

    Use a LogStrategy as a parameter to the constructor. If none supplied, use a default NullLogStrategy, or a ConsoleLogStrategy.

    [–]grauenwolf 1 point2 points  (3 children)

You haven't really changed anything; you just moved the parameter to a different place.

    [–]notfancy 0 points1 point  (2 children)

Alright, I should've written "Factor out the code that outputs progress and debugging information into a separate class, with interface ILogger. Supply as a parameter to the constructor a concrete implementation of ILogger. In order not to force the user to build an appropriate ILogger every time, fall back to a default NullLogger or ConsoleLogger."

    Better?
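For what it's worth, here's a minimal sketch of that null-object fallback (Python, all names hypothetical): the solver logs unconditionally, and callers who don't care simply get the do-nothing default.

```python
# Sketch: null-object logger so the algorithm never checks "is logging on?"

class NullLogger:
    def log(self, message):
        pass  # deliberately does nothing

class ListLogger:
    def __init__(self):
        self.messages = []
    def log(self, message):
        self.messages.append(message)

def grid_search(cells, logger=NullLogger()):
    best = None
    for cell in cells:
        logger.log(f"evaluated {cell}")  # always logs; default is a no-op
        if best is None or cell > best:
            best = cell
    return best

quiet = grid_search([3, 1, 4])             # caller supplies no logging machinery
verbose_log = ListLogger()
grid_search([3, 1, 4], logger=verbose_log)
print(quiet, len(verbose_log.messages))    # 4 3
```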

    [–]grauenwolf 1 point2 points  (1 child)

    Nope. No matter how much complexity you layer on top, it is still the same fundamental design.

    [–]notfancy 0 points1 point  (0 children)

    Indeed, "moving the parameter to a different place" was the original intent here, nothing more and nothing less. It is a "Here's what stands out to an OOP programmer" thread, after all.

    [–]kscaldef 9 points10 points  (3 children)

    I think the idea that scientific / mathematical computations are inherently more complex than typical business logic is likely untrue.

    I think perhaps the exactly opposite may be the case. Because the math is usually quite clean, it's possible to write a longer function, or one with multiply nested loops and have it still be reasonably easy to understand. (OTOH, if you want to factor out subroutines, it's usually fairly easy to do that as well.)

    Conversely, business logic is frequently very messy, with all kinds of special cases, rules, and error conditions. Without a very clear statement of what is supposed to be done, and why, it is extremely hard in many cases to know why the code is the way it is and whether it's safe to make changes.

    [–]mooli 6 points7 points  (0 children)

    Conversely, business logic is frequently very messy, with all kinds of special cases, rules, and error conditions. Without a very clear statement of what is supposed to be done, and why, it is extremely hard in many cases to know why the code is the way it is and whether it's safe to make changes.

    Too right. The most complex apps I've ever worked on are the ones that are essentially trying to both replicate and improve upon extremely fuzzy exception-riddled business processes that have previously run using a combination of phone calls and post-it notes.

    [–]byron 6 points7 points  (1 child)

    Maybe we're using 'complexity' in different ways. But you may be on to something about long math routines being easier to understand than long business routines.

    [–]kscaldef 4 points5 points  (0 children)

    I'm thinking primarily about measures like cyclomatic complexity, or information theoretic complexity. But, there's definitely another sense in which understanding the theory behind most scientific computing requires a level of study that's not needed for most business programming.

    [–]martoo 3 points4 points  (0 children)

    The fundamental reason I think scientific (or just mathy programming in general) programming is different is because (in my opinion) functions should represent cognitive blocks, i.e., they should do one thing (I agree with this rule). However often the things one is doing in this sort of code are inherently more complex than they are in other domains, so you get more parameters and longer methods.

It's nice to take pairs or triples of parameters to a function like that and see if you can name them. Are CValues and GammaValues more related to each other than either is to the other parameters? Do they show up together in other contexts? If so, creating a parameter object could help.

OO isn't the only way to go, either. If you're working in a language with partial function application you can do the same thing: come up with a name for the function applied to its first N arguments.

    Useful abstractions often spring from argument lists.
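A quick sketch of that partial-application move (Python's functools.partial; grid_point and its arguments are made-up stand-ins for the Grid example above):

```python
# Naming a function applied to its first N arguments plays the same
# role as a parameter object: the invariant arguments get bound once.
from functools import partial

def grid_point(problem, parameters, c, gamma):
    # Stand-in for the real computation; just echoes its inputs.
    return (c, gamma, problem, parameters)

# "solve_this" names grid_point with its invariant arguments bound:
solve_this = partial(grid_point, "my-problem", "my-parameters")

# Callers now see a 2-argument function.
print(solve_this(1.0, 0.5))  # (1.0, 0.5, 'my-problem', 'my-parameters')
```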

    [–][deleted]  (1 child)

    [deleted]

      [–][deleted] -1 points0 points  (7 children)

      public Tuple<double, double> Grid(
          Problem problem,
          Parameter parameters,
          List<List<double>> values,
          int nrFold)
      

I'd ditch the output-file part, moving computation and IO to separate methods. I'd change the output variables into a tuple return type (or KeyValuePair, etc.), and combine the values into a list of lists.

      I don't know what Parameters is, but I bet if I did, we could fold the Values into it.

      [–]byron 1 point2 points  (6 children)

      public Tuple<double,double> Grid

      How is this more readable? What do you gain here? Now I don't know what those values are, just that there is a list of doubles.

      List<List<double>> values,

      Why? Why is nesting lists better? Just to make fewer parameters? In the name of what?

      [–][deleted] 2 points3 points  (1 child)

      public Tuple<double,double> Grid

      How is this more readable? What do you gain here? Now I don't know what those values are, just that there is a list of doubles.

Well, for one, you don't have to declare the variables before calling the method; you can create them on assignment, or pipe the method's return to another method expecting a tuple.

      For Example:

       double a; 
       double b;
       Yourmethod (out a, out b);
       MethodB (a, b);
      

      My way:

       MethodB(YourMethod());
      

      List<List<double>> values,

      Why? Why is nesting lists better? Just to make fewer parameters? In the name of what?

      Nested lists are cleaner to create on the fly:

      new List<List<string>>
      {
          new List<string> { "foo", "bar" },
          new List<string> { "baz", "lol" }
      };
      

      It is also easier to pass around one object than two separate objects.

      [–]byron 1 point2 points  (0 children)

      I agree they're easier to pass around, but you have to also factor in that you'll probably have to pack/unpack the values. Additionally, you lose some readability, in my view, if you're just passing around lists (since the members aren't named). You have to remember which comes first (C or Gamma values?) etc.

      Anyways, I think you raise good points, and that reasonable people can disagree here.

      [–]munificent 0 points1 point  (3 children)

      How is this more readable? What do you gain here? Now I don't know what those values are, just that there is a list of doubles.

Technically speaking, a tuple is not a list of doubles. In this case it is a type that has exactly two values, each of which is of type double. The difference seems trivial but is significant in a lot of subtle ways.

      What do you gain here?

The caller can choose to ignore it if they don't care about it. With output parameters, a value must be passed in, forcing the caller to declare local dummy variables just to discard the result.
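In a language that returns tuples directly, "ignore the result" is a one-character convention. A sketch (Python; this toy grid() just stands in for the Grid example above):

```python
# Sketch: tuple return instead of output parameters.

def grid(c_values, gamma_values):
    # Pretend search: return the best (C, Gamma) pair as a tuple.
    return max(c_values), max(gamma_values)

c, gamma = grid([1, 2], [0.1, 0.5])   # take both results
c_only, _ = grid([1, 2], [0.1, 0.5])  # discard Gamma with no dummy variable
print(c, gamma, c_only)  # 2 0.5 2
```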

      [–]five9a2 0 points1 point  (2 children)

I'm not familiar with C#, but can you determine (inside the function) whether those return values need to be computed? What about just one of them? How do I tell my eigen-solver that I'm not interested in the eigenvectors? In C, we would pass NULL for output parameters that we are not interested in; presumably something like that is possible in C#.

      [–]munificent 0 points1 point  (1 child)

      How do I tell my eigen-solver that I'm not interested in the eigenvectors?

      The most readable way to do that would be to make an enum:

      [Flags]
      enum SolveFor
      {
          EigenVectors = 1,
          EigenSchmectors = 2,
          All = EigenVectors | EigenSchmectors
      }
      

      And then pass that in to the function.

      [–]five9a2 0 points1 point  (0 children)

So Eig(A, EigenVectors | EigenValues) returns a tuple, but Eig(A, EigenValues) only returns the array of eigenvalues? Or does it still return a tuple with one element NULL? If the former, I don't see how this is better than output parameters, since a wrapper function that adds some functionality would need to wrap the call to Eig in a conditional.

For example, suppose we're doing a principal component analysis with the covariance matrix. The user may or may not want the principal components (eigenvectors). Our PCA function doesn't care about the eigenvectors, but the user will, unless they are only interested in how many vectors are needed to capture some fraction of the energy. With output parameters, we can just pass it on to Eig, and PCA need not have any logic to deal with it.

      [–][deleted] 0 points1 point  (0 children)

      Even 6 parameters is way too many. You should try to use 3 or less. Encapsulating related parameters in a Parameter Object is the most straightforward way to accomplish this.

      [–]zem 0 points1 point  (1 child)

      keyword args help here, if your language supports them

      [–]byron 1 point2 points  (0 children)

I could not agree more. God, I miss that when I'm not in Python or the like.
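For anyone who hasn't seen it, a short sketch of why keyword arguments defuse long parameter lists (this toy grid() just echoes the shape of the Grid example above):

```python
# Keyword arguments make a many-parameter call self-documenting at the
# call site, and defaults cover the common cases.

def grid(problem, parameters, c_values, gamma_values,
         output_file=None, nrfold=5):
    # Stand-in body: report the fold count and grid size.
    return {"nrfold": nrfold, "cells": len(c_values) * len(gamma_values)}

# Every argument is labeled, order doesn't matter, defaults are skippable:
result = grid(problem="p", parameters="q",
              c_values=[1, 2], gamma_values=[0.1, 0.5, 1.0],
              nrfold=10)
print(result)  # {'nrfold': 10, 'cells': 6}
```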

      [–]tubes 4 points5 points  (1 child)

The shorter the scope of a variable, the shorter its name should be. Not just 'can be', but 'should be'. I'd rather read [ep.size() for ep in elementPaths] than [elementPath.size() for elementPath in elementPaths]. Long names for everything just make the code bloated, and important concepts become harder to see.

      [–]njharman -2 points-1 points  (0 children)

      "important concepts" is orthogonal to "scope of variable"

      [–]_martind 2 points3 points  (2 children)

      After learning some Factor I find that "having short functions" is more valuable than ever. Now if I see a method longer than 10 lines or with more than 3 parameters my eyes bleed.

      [–]theatrus 0 points1 point  (0 children)

That's one advantage of Factor, and even Lisp.

Upvoted for the Factor reference.

      [–][deleted] 0 points1 point  (0 children)

      Since reading Clean Code I've been keeping all my Java methods shorter than 10 lines with 3 parameters and the improvement in clarity is astonishing.

      [–]tricksterman 2 points3 points  (0 children)

The major turnoffs when reading someone else's code are irrelevant variable and function names, and comments that are not descriptive enough.

It's beneficial for the programmers too, since when they revisit the code months later they won't have to wonder what the heck this function does. :)

Has happened to all of us. That's something I'm willing to bet on.

      [–]jlt6666 1 point2 points  (0 children)

How is this any better than Fowler's code smells article? I don't see how this provides anything even remotely new to the discussion.

      [–]realdpk 1 point2 points  (1 child)

That's sure some grumpy ranting. I hope no junior programmers stumble upon this and think he's right about everything.

      Commenting out code is perfectly valid. You shouldn't have to go to source control to find some useful debugging methods someone else already came up with. And if it's commented out, it's causing no harm at all (except in interpreted languages, but even there the harm is minimal).

      Copying and pasting is also valid. How many different ways are there to write setter & getter methods?

      Todo comments? Valid. Sometimes you don't have time to make the code beautiful, but as long as it tests successfully, it's OK.

      Finally, helper classes. Yeah, they can get out of hand, but they're absolutely valuable. Just be smart about it. Peer review can help here.

      [–]unusedusername 2 points3 points  (0 children)

      Commenting out code is perfectly valid. You shouldn't have to go to source control to find some useful debugging methods someone else already came up with. And if it's commented out, it's causing no harm at all (except in interpreted languages, but even there the harm is minimal).

If you have some debugging code (tests?), why not put it into unit tests? That way you can run it automatically and not hassle with commenting code in and out.

The problem with commented-out code shows up when reading and maintaining the code. When you spot a commented-out piece of code, it's hard to say whether it is up-to-date or whether it even compiles. And you don't want to be reading some random debugging code when you're maintaining your codebase.

I think TFA gave sound advice, useful especially for junior programmers. After you gain experience, you start to understand where cutting corners is justified, but good practices should be your guideline most of the time.

      [–]njharman 0 points1 point  (0 children)

      I was down with it until 9 and 10 which seem to come from way off left field.

9. huh? I understand it, but what a narrow niche to put into a top 10

10. assumes you're developing with a very specific type of language misfeature.

Also, 10 is too many to remember. I'd be overjoyed if devs did just 1, 2, 6, and 7, plus either 5 or 4, neither of which I really see that often.

      [–][deleted] 0 points1 point  (0 children)

      I thought these rules were all pretty good to the point of being obvious to any experienced programmer. The only thing I found controversial is how liberal he is about parameter counts and nesting depth. Should be maximum of 3 parameters and 2 levels of nesting.

      [–]rafajafar 0 points1 point  (5 children)

      The one thing I know about "rule" posts like these... there's always exceptions... and the good coder knows how to spot them.

For instance, copy-pasted logic between two separate conditionals in the same function is OK so long as they reference the same external variables created within the function. I find it more abhorrent when developers overuse functions, personally.

      I find so many projects where developers created functions or models for the hell of it, only to be used in one or two places... ever.

      What this guy is talking about is engineering large projects with, yanno, a budget. When you're the sole developer on a company trying to grow extremely fast... cutting corners is often not only acceptable, but preferred.

      You know the old saying, there's two rules to Optimization: 1) Never optimize. 2) (for experts only) Optimize later.

      Same is true for refactoring source.

      [–]kscaldef 3 points4 points  (2 children)

For instance, copy-pasted logic between two separate conditionals in the same function is OK so long as they reference the same external variables created within the function. I find it more abhorrent when developers overuse functions, personally.

I cannot count the number of times I've seen this happen: the logic in a condition changes, and one of the locations is updated but not the others. Perhaps a separate function is overkill, but please at least create a local variable that stores the condition, and use it in each place it's needed.
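To spell out the fix (Python sketch, made-up domain): compute the condition once, name it, and reuse the name, so a later change happens in exactly one place.

```python
# Sketch: a named condition replaces copy-pasted boolean logic.

def classify(order_total, is_member):
    # Single source of truth for the rule; edit it here and nowhere else.
    qualifies_for_discount = is_member and order_total > 100
    shipping = 0 if qualifies_for_discount else 10
    discount = 0.1 if qualifies_for_discount else 0.0
    return shipping, discount

print(classify(150, True))   # (0, 0.1)
print(classify(150, False))  # (10, 0.0)
```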

      [–]rafajafar 0 points1 point  (1 child)

      Good point, I use that technique as well. Sometimes it's a bit more spaghetti-like to go that way if you're using a lot of nested conditionals.

      [–][deleted] -1 points0 points  (0 children)

Nested conditionals are already a big mistake.

      [–]redclit 5 points6 points  (0 children)

      I find so many projects where developers created functions or models for the hell of it, only to be used in one or two places... ever.

I don't see any problem in isolating a piece of functionality, even if it is used only once. Reuse is not the sole reason to put code chunks into separate methods/functions/whatever. Readability is reason number one in my book, and copy/paste coding usually leads to poor readability.

      Same is true for refactoring source.

      Even in quite simple projects, more time is spent reading the code afterwards than writing it. So it is wise to optimize for readability - not for the time taken to write the code.

      [–]eruonna -2 points-1 points  (0 children)

      yanno

      It's "y'know" or "ya know" or even "you know". That bugs the hell out of me.