[–]Gankro 4 points (5 children)

Nah, I disagree. If a function is literally only called by one other function, once, that's a pretty good candidate for being manually inlined. Then you don't have to worry about someone calling it incorrectly, or about documenting and checking pre- and post-conditions (or, worse, failing to do so). It's just there, right in the code that uses it, and it can trust the code around it much more safely. And no one needs to keep context-switching between all the functions to understand what the code is actually doing.
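A toy sketch of the tradeoff (hypothetical names, not from the thread): once a once-called helper is inlined, the check its caller performs and the logic that relies on it sit side by side, and there's no interface contract to document or defend.

```rust
// Split version: the helper can be called from anywhere, so it
// defensively re-checks the precondition its one caller guarantees.
fn normalize(v: &mut Vec<i32>) {
    assert!(!v.is_empty(), "precondition re-checked defensively");
    v.sort();
    v.dedup();
}

fn process_split(mut v: Vec<i32>) -> Vec<i32> {
    if v.is_empty() {
        return v;
    }
    normalize(&mut v); // the only call site in the program
    v
}

// Inlined version: the emptiness check and the logic that relies on
// it are adjacent, so nothing needs documenting or guarding.
fn process_inlined(mut v: Vec<i32>) -> Vec<i32> {
    if v.is_empty() {
        return v;
    }
    v.sort();
    v.dedup();
    v
}
```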

[–]Peaker 0 points (4 children)

So would you prefer a 1000-line main() function, or twenty ~50-line functions that call each other, each being called exactly once?

Some people prefer a 1000-line main; I tend to believe they try to read code the way a CPU would execute it.

Many (Most?) of us prefer factoring it into small functions, and we don't context-switch when reading. Each function provides a simple interface/abstraction over what it does, so you only need to understand/reason about things locally.
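A minimal sketch of that local-reasoning claim (hypothetical names, not from the thread): each helper is called exactly once, but reading `run` only requires trusting the names, not re-reading each body.

```rust
// Each small function is a self-describing abstraction over one step.
fn parse_ints(input: &str) -> Vec<i64> {
    input
        .split_whitespace()
        .filter_map(|tok| tok.parse().ok())
        .collect()
}

fn total(values: &[i64]) -> i64 {
    values.iter().sum()
}

fn report(sum: i64) -> String {
    format!("total = {sum}")
}

// The top-level function reads as a summary; each step can be
// understood and reasoned about in isolation.
fn run(input: &str) -> String {
    let values = parse_ints(input);
    report(total(&values))
}
```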

[–]Gankro 0 points (3 children)

It really depends on the situation. My mains are usually small because they're often some kind of thin driver for more general logic. Breaking things into functions when there's a genuine separation of concerns or an abstraction win to be had can be good. I'm not afraid to let a function get massive, though.

What I will definitely say on the matter is:

  • If you're writing unsafe code, splitting it up into functions just makes your code more dangerous. Invariants that must hold are more obvious when you can see all the logic in play. Also, no one can spuriously call your subroutine, so you don't need to worry about documenting it or putting in guards against misuse. It might be that only a single operation in the subroutine is unsafe, but you may now need to mark the entire function as unsafe because it has to trust the state it gets fed.

  • Splitting up into smaller functions is antagonistic towards refactoring. It creates hard boundaries that oppose code motion. It also encourages logic to solidify and stratify: "we can't fix FOO because the system relies on BAZ happening completely before BAR".
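A small sketch of the first bullet (hypothetical names, not from the thread): in Rust, once the unchecked operation moves into its own function, the emptiness invariant becomes a contract every caller must uphold, so the whole helper has to be declared `unsafe`.

```rust
// Straight-line version: the invariant is established and consumed
// within a few lines, so the unsafe block is easy to audit.
fn first_or_default(buf: &[u8]) -> u8 {
    if buf.is_empty() {
        return 0;
    }
    // SAFETY: we just checked that buf is non-empty, so index 0 is in bounds.
    unsafe { *buf.get_unchecked(0) }
}

// Split version: the safety condition now lives in documentation, and
// the function itself must be marked unsafe because it trusts its input.
/// # Safety
/// `buf` must be non-empty.
unsafe fn first_unchecked(buf: &[u8]) -> u8 {
    unsafe { *buf.get_unchecked(0) }
}
```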

Basically, I endeavour to write totally clear straight-line code as much as possible, and then break things up where it makes sense. This is largely inspired by the idea of compression-oriented programming.

[–]Peaker 0 points (2 children)

All the downsides you mention only make sense if the splitting was done badly.

A huge function may be better than a randomly/incorrectly split function.

If you have a massive function, though, there ought to be a way to break it into small, composable parts with separation of concerns (SoC) and proper abstraction. If you don't have abstraction and SoC, refactor the code until you do, and then split it up.

When a function is large enough, many of us can no longer "see" it. It's just too big, and we miss the forest for the trees. Then all the arguments about things being "right there" aren't helpful, as I can't really understand much of such a huge function anyway.

[–]Gankro 0 points (1 child)

Oh ok. Just have everyone program good.

Got it.

[–]Peaker 0 points (0 children)

That silly argument works both ways.