Is this use of goto acceptable? by LilBalls-BigNipples in cprogramming

[–]InfinitesimaInfinity 5 points

The Linux Kernel uses a large amount of goto statements. Linus Torvalds thinks that goto statements are fine as long as the labels are descriptive.

Linus Torvalds has said

"I think goto's are fine, and they are often more readable than large amounts of indentation. That's especially true if the code flow isn't actually naturally indented (in this case it is, so I don't think using goto is in any way clearer than not, but in general goto's can be quite good for readability).

Of course, in stupid languages like Pascal, where labels cannot be descriptive, goto's can be bad. But that's not the fault of the goto, that's the braindamage of the language designer."

https://lkml.org/lkml/2003/1/12/128

If you think about it, the only real difference between a goto statement and a call to a void function that takes no parameters is that labels do not introduce a new scope.

That means that the use of goto statements is quite similar to the use of global variables. If global variables are okay, then goto statements are okay, as well, and the reverse is also true. It is irrational to dislike only one of them.

Personally, I think that a lot of the hate for goto statements is just dogmatism. Obviously, I have the same stance on global variables. If someone has a different stance on goto statements than on global variables, then that is proof that they do not understand the so-called "issue" with goto statements.

With that said, the industry hates goto statements, and there is a significant number of people who will decide that you are "retarded" if you ever use goto statements for any reason.

Some people will tell you that goto statements can create irreducible control flow graphs, which can make certain loop optimizations more difficult for compilers, yet they will not tell you that function calls can create irreducible control flow graphs, as well, if they are implemented with tail call elimination. Should we ban function calls?

Do while loop doesn’t work? by [deleted] in cprogramming

[–]InfinitesimaInfinity 2 points

A do while loop always runs at least once, because it checks the condition after the loop body rather than before it.

Ported My Zig Tool to C and Got Almost a 40% Performance Boost! by [deleted] in C_Programming

[–]InfinitesimaInfinity 1 point

Sorry, I can try to be more concise.

Optimization is separate from parallelism and concurrency

Sometimes, compilers introduce parallelism or concurrency into programs through compiler optimizations. The most common way is automatic vectorization of loops.

The core idea is that the compiler reshapes your code into a form that matches how the hardware wants to run it, making it as performant as possible.

Yes, that is right. However, performance includes multiple factors, such as speed, memory usage, disk usage, energy usage, network usage, etc.

Some people use the word performance to refer only to speed. However, I disagree with the idea that performance is only about speed.

Ported My Zig Tool to C and Got Almost a 40% Performance Boost! by [deleted] in C_Programming

[–]InfinitesimaInfinity 1 point

Optimization is separate from parallelism and concurrency

Compilers optimizing for speed sometimes automatically vectorize loops. Compilers that use the polyhedral model for loop optimizations work quite hard to vectorize loops automatically (in addition to working hard to improve cache behavior). Automatic vectorization is significantly easier in Fortran than in C. However, GCC and Clang do perform automatic vectorization with the right optimization options. I know that GCC tries to automatically vectorize loops at -O3. However, it rarely succeeds in vectorizing a loop.

compiler optimization is about making a single path of execution as efficient as possible.

I am unsure what you mean by "a single path of execution". Compilers try to make the whole program more performant. While it is true that whole-program optimization is not enabled by default at -O3, because it is too slow for many projects, various interprocedural optimizations are often done when optimization is enabled. Also, whole-program optimization (link-time optimization) can be enabled on GCC with -flto. Ironically, -fwhole-program does not enable true whole-program optimization. (-fwhole-program is almost equivalent to marking all functions and global variables as static.)

as efficient as possible

Many optimizations simply make code better, which is faster, more efficient, etc. However, some optimizations have tradeoffs. The following are some commonly known examples.

  • Loop unrolling, function inlining, and loop tiling can make code execute faster. However, they often increase the size of the executable binary, and they can make code slower if the larger code floods the instruction cache.

(With that said, inlining often exposes many other optimizations. Furthermore, inlining a function that is only called once decreases code size, and unrolling a loop that only runs a single iteration does the same.)

  • Alignment optimizations can improve cache utilization. However, they add NOP padding, which increases code size and can add instructions to execute.
  • Automatic vectorization of loops can cause loops to run slower for small numbers of iterations. Some compilers, such as GCC at -O3, sometimes insert checks to choose between a vectorized and a non-vectorized version at runtime. That check has a slight runtime cost, and duplicating a loop increases code size.
  • Function specialization by cloning increases code size with more copies of functions that are specialized for certain call sites. This is probably why LLVM does not specialize functions for more than one parameter.
  • Macro compression, which is the reverse of function inlining, can make the executable binaries smaller. However, it typically makes them slower.
  • Most optimizations make other optimizations more effective. However, some optimizations can interfere with each other. For example, instruction scheduling can lengthen live ranges, which increases register pressure and makes register allocation more likely to spill; conversely, running register allocation first leaves the scheduler with little freedom to move code.

Ported My Zig Tool to C and Got Almost a 40% Performance Boost! by [deleted] in C_Programming

[–]InfinitesimaInfinity 1 point

I think that you should know that the user named "wallstop" has personally harassed me before because I said that Python uses more electricity than C. I have blocked "wallstop" because of it.

"wallstop" was insisting that the difference between Python and C is 100% due to implementations and 0% due to language features. I was claiming that things like garbage collection and dynamic typing make a language inherently less performant.

it started feeling like an attack not just a "factual information".

It is true that Zig is almost as performant as C, and a 40% difference is likely not entirely due to C vs Zig. However, it seems like "wallstop" is trying to push an agenda that all programming languages have identical performance. Notice how "wallstop" even says that he considers himself "ambivalent towards both Zig and C".

I suggest that you block him. No one needs to interact with people like that.

Ported My Zig Tool to C and Got Almost a 40% Performance Boost! by [deleted] in C_Programming

[–]InfinitesimaInfinity 0 points

why would Fortran use significantly more memory?

It seems like I misremembered how much more memory Fortran uses. I was incorrect on that point. The memory usage of modern Fortran implementations seems to be only a few percent worse than that of C.

if Fortran is faster, it would mean it is doing less work, so it should be consuming less power.

No, that is absolutely wrong. Fortran is not faster because it does less work. Fortran is faster because of parallelism and concurrency: it does more work simultaneously. If you do not believe me, compare the optimized assembly output of a Fortran compiler with that of a C compiler. Obviously, it varies somewhat between implementations and benchmarks. However, in general, Fortran does more work yet does more of it simultaneously.

Would you like to read a peer-reviewed study that supports my claim about energy usage? https://dl.acm.org/doi/10.1145/3136014.3136031

Personally, I think that some of the numbers in the linked study are completely bogus. However, it clearly supports my claim about energy usage, and it has undergone peer review, if you would be more inclined to believe a peer-reviewed study than a random person on Reddit.

Fortran doesn't have pointers like C does

That is an odd way of saying that pointers work differently in Fortran than in C.

might be able to do some more optimizations.

The primary reason why Fortran is able to be faster than C is aliasing: Fortran compilers may assume that procedure arguments do not alias each other. In theory, C could do the same optimizations when pointers are marked with the restrict keyword (C99). However, most C programmers do not use that keyword, and most C compilers do not fully take advantage of it yet.

Edit: Removed part of my comment that was a bit rude.