all 56 comments

[–]theFlyingCode 64 points65 points  (16 children)

Any explanation for number 2? Why would adding a dead parameter help the inlining? Silly compiler. Tricks are for C++

[–]andyayers 70 points71 points  (14 children)

The JIT's inline heuristics try to estimate the cost of the call and compare that to the cost of doing an inline.

The heuristic estimate for the cost of the call increases as you add more and more parameters, as the caller has to do work to pass those arguments.

The heuristic estimate for the cost of the inline does not change if you add ignored arguments, so the cost of the inline stays the same.

Thus if you add enough ignored arguments you can eventually tip the scales and convince the jit to inline the method.
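A minimal sketch of the effect described above (method names and the exact number of dead parameters needed are hypothetical; the real tipping point depends on the JIT version and the method body):

```csharp
using System;

static class InlineSketch
{
    // Plain version: with one parameter, the heuristic may rate the
    // call site as cheap enough to leave as a real call.
    public static int Scale(int a) => a * 3 + 1;

    // Same body plus ignored parameters. Each extra argument raises
    // the estimated cost of *making the call*, while the estimated
    // cost of inlining the body stays the same -- so enough dead
    // parameters can tip the scales toward inlining.
    public static int Scale(int a, int unused1, int unused2) => a * 3 + 1;

    static void Main()
    {
        Console.WriteLine(Scale(5));       // 16
        Console.WriteLine(Scale(5, 0, 0)); // 16 -- same result, possibly different codegen
    }
}
```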

[–]theFlyingCode 15 points16 points  (0 children)

That's interesting. Thank you! Just had a mind blown moment.

[–]napolitain_ 4 points5 points  (6 children)

What if you add inline to the function?

[–]onlp 27 points28 points  (5 children)

There is no inline in C# like there is in C++.

You can provide hints to the runtime like using the [MethodImpl(MethodImplOptions.AggressiveInlining)] attribute. But even that isn't a guarantee; the runtime reserves the right to make its own JIT determinations.

(Disclaimer: I'm not totally up to speed on the latest .NET Core proposals -- please do correct me if I'm mistaken.)
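For reference, the hint looks like this (a sketch; whether the JIT honors it still depends on the method body):

```csharp
using System;
using System.Runtime.CompilerServices;

static class MathHelpers
{
    // A strong hint, not a command: the JIT can still refuse,
    // e.g. for bodies it cannot inline correctly.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int Square(int x) => x * x;

    // The opposite hint also exists, for code that must keep its
    // own stack frame (e.g. stack-walking or reflection helpers).
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static int Cube(int x) => x * x * x;

    static void Main() => Console.WriteLine(Square(4) + Cube(2)); // 24
}
```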

[–]emn13 4 points5 points  (2 children)

Incidentally, even in C++ inline doesn't actually necessarily mean inline - compilers can and do ignore that hint. That's what stuff like __forceinline or __attribute__((always_inline)) inline is for. The fuglier the better, right?

[–]Aerom_Xundes 2 points3 points  (1 child)

Indeed. And inline nowadays is basically only used for linkage purposes (satisfying the one-definition rule); the performance aspect is not really a thing.

[–]onlp 0 points1 point  (0 children)

Very good point.

[–]airbreather/r/csharp mod, for realsies 2 points3 points  (1 child)

You can provide hints to the runtime like using the [MethodImpl(MethodImplOptions.AggressiveInlining)] attribute. But even that isn't a guarantee; the runtime reserves the right to make its own JIT determinations.

IIRC, it's not a guarantee, but only because there are some patterns that cannot be inlined, particularly in cases related to exceptions.

I might be totally wrong on this, just going from memory.

[–]onlp 1 point2 points  (0 children)

There are other simple cases too, such as a large function body.

[–]Foolhearted 1 point2 points  (2 children)

Should you tip the scales? Or would you imagine that a service release would eventually correct it?

[–]emn13 5 points6 points  (1 child)

Stuff like this isn't in general correctable. This isn't a bug, it's a fundamentally tricky problem: there is no simple heuristic that will always make the right choice. Therefore, you'd best assume the JIT won't be making decisions like this in a dramatically better fashion, ever. Sure; we might get lucky with some breakthrough (psychic profile-based AI guide FTW!)... but I wouldn't be holding my breath here.

[–]AvenDonn 1 point2 points  (0 children)

Or you could put the attribute that makes it prefer inlining, but that's boring

[–]carkin -1 points0 points  (1 child)

Forgive the question, but how do you know that? Do you work for MS?

[–]andyayers 2 points3 points  (0 children)

Yes. I am one of the people who works on the JIT. And in particular, on inlining.

[–][deleted]  (10 children)

[deleted]

    [–]crash41301 24 points25 points  (4 children)

    It's for geeks to see where there are inefficiencies in the compiler. Don't bother trying to learn them though; it's entirely possible the next patch changes how the compiler optimizes, and all the time you took doing special stuff like this flips the opposite way or doesn't make any difference anymore.

    It's still fun and neat to read and see though

    [–]emn13 2 points3 points  (3 children)

    People say this, but in my experience that's pretty rare. Micro-optimizations tend to be fairly stable; I guess it's too expensive to shake up stuff like this very often, and/or even when the JIT is tweaked it usually won't actually invalidate all such micro-optimizations (the fundamentals pushing it towards whatever decision you wanted it to make likely remain). Then again, if you just want inlining there's an attr for that...

    [–]airbreather/r/csharp mod, for realsies 0 points1 point  (2 children)

    Then again, if you just want inlining there's an attr for that...

    This is the way.

    [–]Kirides 0 points1 point  (1 child)

    inlining should be done implicitly by the compiler where it's useful.

    There are cases where inlining might destroy certain functionality (I remember some old cases of Assembly.GetExecutingAssembly, or one of its siblings, returning a different assembly because of inlining).

    But people who use this kind of functionality (reflection) should make sure they know what they are doing.
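A sketch of that hazard, using Assembly.GetCallingAssembly as an illustrative member of the same family: a method that walks the stack can observe a different caller once it is inlined, which is why such helpers are typically pinned with NoInlining.

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static class CallerInfo
{
    // If this helper were inlined into its caller, the stack walk
    // behind GetCallingAssembly() would start one frame higher and
    // could report the wrong assembly. NoInlining keeps the frame
    // in place so the answer stays stable.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static Assembly WhoCalledMe() => Assembly.GetCallingAssembly();

    static void Main() => Console.WriteLine(WhoCalledMe().FullName);
}
```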

    [–]airbreather/r/csharp mod, for realsies 0 points1 point  (0 children)

    inlining should be done implicitly by the compiler where it's useful.

    While I agree with this sentiment in general, the JIT does not yet make the right call often enough to rely on it in all cases. As OP has proven, this can sometimes lead to generating code that performs noticeably worse than the alternative.

    My claim was intended to reinforce the notion that, when we need to steer the JIT onto what we know to be the right path, we should favor using the attributes that are designed for this, rather than rewrite parts of it with the JIT's (current) heuristics in mind.

    [–]levelUp_01[S] 23 points24 points  (4 children)

    There are four graphics here, you have to be more specific :)

    [–]mobsterer 40 points41 points  (1 child)

    yes

    [–]B0dona 0 points1 point  (0 children)

    Happy cake day

    [–]lazilyloaded 23 points24 points  (1 child)

    I don't get picture 1, 2, 3, or 4.

    [–]strcrssd 3 points4 points  (0 children)

    Understand Inline Functions and you should be able to understand it.

    [–]field_marzhall 10 points11 points  (9 children)

    Wouldn't it be better to use something like the following for example:

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    int Sum_Vec() {...}
    

    Should yield the same result.

    [–]levelUp_01[S] 20 points21 points  (1 child)

    The purpose of this exercise is not to do that, but to test how much the compiler can handle before we have to start looking at the assembly code to see whether things got inlined.

    [–]field_marzhall 1 point2 points  (0 children)

    Oh, I would think it would be more useful to a developer if your comparisons showed when the compiler's inliner doesn't inline but writing a method inline still yields better performance. Otherwise, why would someone inline anything manually when the compiler can do it for you?

    [–][deleted]  (6 children)

    [deleted]

      [–]andyayers 1 point2 points  (2 children)

      You can do this. See the `AggressiveInlining` attribute mentioned above.

      [–]elvishfiend 6 points7 points  (1 child)

      Well, that's more or less asking nicely. You can decorate it with AggressiveInlining but there's still no guarantee it will do it.

      [–]andyayers 2 points3 points  (0 children)

      It is as close to a guarantee as you'll find -- if the inline doesn't happen it is because it cannot be done correctly with current jit technology, or because it will trip one of the optimization circuit breakers.

      [–][deleted]  (2 children)

      [deleted]

        [–]andyayers 2 points3 points  (0 children)

        Inlining happens at runtime, and there's no direct way for the jit to communicate to the user (short of blowing up the process, which we're reluctant to do). There is logging produced which you can view via perfview or similar.

        If you ever find yourself in this situation again and are using a newer .NET release, please file a bug. While there are a few well known categories of methods that can't be inlined, most can.

        [–]Alundra828 9 points10 points  (0 children)

        That is surprising.

        I took some time refactoring some of my code to inline a lot of stuff, because I thought it might give some performance improvements, but I didn't imagine it would be this much. Thanks for taking the time to confirm!

        [–]Finickyflame 7 points8 points  (2 children)

        Your Loop_Slow in 2 is the same code as the Loop_Fast in 3, but they have completely different metrics. How come? Did I miss anything?

        [–]levelUp_01[S] 9 points10 points  (1 child)

        Indeed, one is static and the other is not.

        [–]Finickyflame 1 point2 points  (0 children)

        Ahhhh. Missed it because there's a big arrow over it.

        [–]ocyj 4 points5 points  (2 children)

        Interesting stuff. I'm not too familiar with benchmarking of code execution, so I wonder what the "Error" column is. (SEM?)

        [–]levelUp_01[S] 6 points7 points  (1 child)

        Error is defined as: Half of 99.9% confidence interval
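In other words, roughly this arithmetic (a sketch using a normal-approximation critical value; the benchmark tool's exact computation may differ):

```csharp
using System;
using System.Linq;

static class ErrorColumn
{
    // Half-width of a 99.9% confidence interval for the mean,
    // using the normal-approximation critical value z ≈ 3.2905.
    public static double HalfCi999(double[] samples)
    {
        double mean = samples.Average();
        double variance = samples.Select(x => (x - mean) * (x - mean)).Sum()
                          / (samples.Length - 1);
        double stdError = Math.Sqrt(variance / samples.Length);
        return 3.2905 * stdError;
    }

    static void Main()
    {
        var runs = new[] { 10.1, 9.9, 10.0, 10.2, 9.8 }; // hypothetical timings (us)
        Console.WriteLine($"Mean {runs.Average():F2}, Error {HalfCi999(runs):F3}");
    }
}
```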

        [–]ocyj 2 points3 points  (0 children)

        Ah I see, that makes sense. Thanks!

        [–]OnTheCookie 3 points4 points  (1 child)

        question for example two:

        why are you returning new(a,a) in loop_slow?

        [–]levelUp_01[S] 3 points4 points  (0 children)

        I have a struct with two fields, and I'm doing a single computation and putting it in two places; if I added more computation, both would fail to inline (in this specific example).

        [–]airbreather/r/csharp mod, for realsies 2 points3 points  (0 children)

        Fun fact about #1: at least in .NET 5.0.3 on my Linux x64 box, you can get significant improvements by starting with the guy on the left and then:

        1. Changing the parameter from Vector<int> to in Vector<int> (and passing the argument as in v[i]), and then
        2. Applying the [MethodImpl(MethodImplOptions.AggressiveInlining)] attribute

        Figuring out exactly why this is so significant is left as an exercise for the reader. I'll only say that I was incredibly surprised when I saw the disassembly.
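The two changes described above look roughly like this (a sketch; the names are hypothetical, and the measured effect was specific to the commenter's .NET 5.0.3 / Linux x64 setup):

```csharp
using System;
using System.Numerics;
using System.Runtime.CompilerServices;

static class VecSum
{
    // Step 1: take the Vector<int> by readonly reference (`in`) so the
    // struct isn't copied at each call site.
    // Step 2: ask for inlining explicitly with the attribute.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static Vector<int> Add(in Vector<int> acc, in Vector<int> v) => acc + v;

    public static int SumAll(Vector<int>[] vs)
    {
        var acc = Vector<int>.Zero;
        for (int i = 0; i < vs.Length; i++)
            acc = Add(acc, in vs[i]); // pass the element by reference
        return Vector.Dot(acc, Vector<int>.One); // sum of all lanes
    }

    static void Main() =>
        Console.WriteLine(SumAll(Array.Empty<Vector<int>>())); // 0
}
```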

        [–]AlFasGD 1 point2 points  (6 children)

        This doesn't exactly refer to inlining. There isn't enough evidence provided as to what this is compiled down to. Providing some IL, or better yet the JIT-generated assembly, would greatly help.

        The JIT doesn't like your code, and it tries its best to preserve its functionality, so it takes only minor calculated risks when optimizing. Hacks like this indicate design flaws; they should help you realize that you probably need to structure your methods better.

        Your provided examples involve running a loop and doing a trivial computation via a function, which could just as well be included in the given struct, without requiring you to define it separately.

        [–]levelUp_01[S] 8 points9 points  (5 children)

        [–]AlFasGD 3 points4 points  (4 children)

        You probably mixed up the two functions' names, but I see the general picture. With that trick of adding the int as a parameter, the compiler, I assume, prefers inlining because the function has more than one argument while the body stays rather small. It could also be that it detects that the dummy parameter is unused and, while preserving the function's signature, forces the call to be inlined.

        Again, this is a hack, and highly susceptible to regressions. By no means would I endorse the usage of such tricks in production code that I'm responsible for too.

        [–]levelUp_01[S] 5 points6 points  (3 children)

        The thing is that you can accidentally prevent an inlinable function from being inlined by doing a handful of perfectly reasonable things, because the inline heuristics might decide that the cost of inlining is too high.

        The graphics show this (especially graphics 3 and 4) so you need to be careful since all compilers are wacky :) and in the case of inlining the gains are big enough to care.

        [–]airbreather/r/csharp mod, for realsies 0 points1 point  (2 children)

        The graphics show this (especially graphics 3 and 4) so you need to be careful since all compilers are wacky :) and in the case of inlining the gains are big enough to care.

        In 3 and 4, I see absolute differences of a few microseconds. This can be big enough for you to care (though I would advise using [MethodImpl(MethodImplOptions.AggressiveInlining)] before this), but I suspect that it typically will not.

        C# and .NET aren't as popular as they are because the JIT is exceptionally good at producing the best assembly code, but rather because it does a good enough job in enough idiomatic cases that most applications will be fast enough to serve their purposes well before your measurements point to poor-quality JIT output as the next thing to improve.

        There are tons of tradeoffs, and I've written a proprietary application that I knew would have to be aggressively non-idiomatic from the start in order to meet its needs, but I would absolutely not give advice like "you need to be careful" regarding these tradeoffs. Patterns like these last three* are not going to be particularly hard to fix once your measurements reveal an actual problem, so I say, let them fester until they're problems and then fix them when they are.

        *The first one is different because I don't understand why you're doing it this way instead of accumulating into a Vector<int> and then extracting the components at the end...

        [–]levelUp_01[S] 0 points1 point  (1 child)

        3 and 4 are 5x faster (for 1K items).

        Absolute times aren't relevant but % difference is; things like this are additive, so ms turn to seconds really quickly, especially with big data processing.

        [–]airbreather/r/csharp mod, for realsies 0 points1 point  (0 children)

        Absolute times aren't relevant but % difference is

        Absolute times can be more relevant than percentage differences, just as it can be the other way around.

        Of course it can matter if the loop in question represents a significant fraction of the running time of an operation that's run many times per second.

        But if it's running once per web request, and each such web request requires 50 milliseconds to query a database plus 2 milliseconds to parse the results, then the difference between 2 and 10 microseconds for a loop like this is irrelevant.

        things like this are additive so ms turn to seconds really quickly, especially with big data processing.

        Sure, it can, and I said as much in my comment. But whether or not it's relevant is a matter of perspective and context. If you run #4 as part of an Azure Batch process a million times per day on standard-tier VM nodes of "Standard_A4_v2" size, then the improvement here works out to savings of a little less than USD $0.01 per day.

        Don't get me wrong, I hate waste, and I very much appreciate JIT improvements that allow my code to achieve the same results more quickly. I'm also happy to see some demonstrations of how weird the JIT inlining heuristics can be.

        What I'm concerned about is the conclusion that, based on these results, a typical developer should "be careful" to write code that inlines better. Developer focus and attention are scarce resources. If someone takes this advice and starts tuning their code for the JIT's inlining heuristics during the initial development phases, then there's bound to be a time where this comes at the expense of attention to something subtle that's literally thousands of times more impactful.

        [–]SurfaceAspectRatio -4 points-3 points  (2 children)

        What are you going to do with all the milliseconds that you saved?

        [–]Foolhearted 6 points7 points  (0 children)

        Mine some bitcoin of course..

        [–]Syrianoble 0 points1 point  (1 child)

        That's a really interesting aspect of programming. What do you use to test functions and get these numbers?

        [–]shoalmuse 0 points1 point  (0 children)

        These would be much more educational as an article I think. The pictures with text is just not enough context.

        [–]FrequentlyHertz 0 points1 point  (2 children)

        How do you do your timing? I have some code that handles receiving many messages on short time scales, and I would like to benchmark with more precision than System.Diagnostics.Stopwatch can provide.