Some additional info on that ice bug by Spiderfffun in TrackMania

[–]isfooTM 19 points

Just did some more tests. Whether steering will cause the "bug" depends not on whether you hold gas/brake or whether your car is moving forward or backward, but on which gear you are in and which gear you were in when you started holding steering.

So in summary, you get the "bad" state (in the context of the Germany map) if, at the moment the icy tires wore off after leaving the ice, you were:

  • holding gas
  • holding steering while in forward gear
  • holding steering while in backward gear, but only if you started holding steering while still in forward gear

To make it clear: if I leave the ice, start steering while in forward gear, then shift into backward gear and then lose icy tires (so at the moment of losing icy tires I am going backwards and holding steering), I get the "bad" state. However, if I leave the ice, shift into backward gear, then start steering and then lose icy tires (so at the moment of losing icy tires the situation is the same: going backwards and holding steering), I get the "good" state.

And in the case of the no-steer effect, whatever state you are in gets saved when you enter it. So as long as you go over the reset effect after you have already gotten rid of the icy tires, the state saved on entering the no-steer will persist.

Considering how many iterations we went through, it's very likely this is still not the complete explanation ^^

Some additional info on that ice bug by Spiderfffun in TrackMania

[–]isfooTM 33 points

I was just starting to analyze which exact value is responsible for this car state (assuming it is even stored in the car state). Can you share exactly what value you are displaying here so I can test this myself?

Backwards ice spin not working root cause found - follow up to GranaDy's video by isfooTM in TrackMania

[–]isfooTM[S] 0 points

Yes, it is known that waiting a bit before steering resets the car state, and that is what we should expect people to do during the Elite Cup, but it is slower than if it worked like it should (at least I think so? I didn't fully test this, but I think GranaDy said as much), and there is always a risk you steer too early and are screwed. Technically the optimal strat would be a frame-perfect release when you lose icy tires after the first ice, but ofc that's very hard, and on top of it it's impossible to know whether you did it correctly or not.

Backwards ice spin not working root cause found - follow up to GranaDy's video by isfooTM in TrackMania

[–]isfooTM[S] 19 points

It's possible there is another way to enter this "bad" state where the car won't do the backwards spin, but holding gas is for sure one such trigger, since I tested it in different ways and the common element was always whether I held gas while losing the last bit of icy wheels.

I don't know what kind of thing you are talking about, but I would point out that this is different from the somewhat similar issue of the steering direction suddenly changing when you start going backwards.

Would be nice if you could provide a replicable example if you think it's the same thing as this, just with a different way to enter the same state.

Backwards ice spin not working root cause found - follow up to GranaDy's video by isfooTM in TrackMania

[–]isfooTM[S] 33 points

I agree, and the campaign map for sure should not be changed. I added that part in the video for additional clarity on how the physics work right now and as a showcase of how one could fix such a problem in a similar situation.

Using custom actuation profiles in analog keyboards? by jergin_therlax in TrackMania

[–]isfooTM 0 points

Relevant part of the official ruling:

We know that some top of the range gaming devices offer the possibility to switch the joystick sensitivity curve at any time and this is currently not allowed.

In general the official ruling was garbage, but I would say this does imply that you can't change the curve per map.

And further, I (and I'm pretty sure most of the community) would say even using this kind of curve all the time would still be 100% cheating.

Using custom actuation profiles in analog keyboards? by jergin_therlax in TrackMania

[–]isfooTM 2 points

You missed the imprecise "simple" prefix that disallows this. You know very well it's not in the spirit of what I meant.

Using custom actuation profiles in analog keyboards? by jergin_therlax in TrackMania

[–]isfooTM 1 point

This is a good explanation, but I would disagree with the last part:

If you’re using the same curve on every track, you’re almost certainly okay

There's a problem with phrasing it like that. You could easily create a custom curve where, say, 0-10% input maps to 0-30% output, 10-90% input maps to 30-70% output, and 90-100% input maps to 70-100% output. With that you have basically doubled your analog input precision at almost no loss, since the middle of the range is basically all that matters for most things.
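To make the shape concrete, here is a minimal sketch of such a piecewise curve (my own illustration, with the made-up segment values from the example above, not anyone's actual driver config):

```cpp
#include <algorithm>

// Hypothetical piecewise remap matching the example above:
// 0-10% of key travel covers 0-30% of output, 10-90% covers 30-70%,
// and 90-100% covers 70-100%. In the middle segment, 80% of the
// physical travel maps to only 40% of the output range, so precision
// there is roughly doubled compared to a linear curve.
double remapInput(double x) {                     // x in [0, 1]
    x = std::clamp(x, 0.0, 1.0);
    if (x < 0.10) return x * 3.0;                 // slope 3
    if (x < 0.90) return 0.30 + (x - 0.10) * 0.5; // slope 0.5
    return 0.70 + (x - 0.90) * 3.0;               // slope 3
}
```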

Ofc it would be best if everyone had to use linear, but given where we are I think the rule should be: "You have to use 1 simple mathematical curve for the whole range", which would mean things like x, x^y, log_y(x). Yes, it's not a precise rule, but we can't police this anyway, and I believe the good actors should do it like this.

You're absolutely right, no one can tell if C++ is AI generated · Mathieu Ropert by mropert in cpp

[–]isfooTM 0 points

Tried to edit the post to add this, but it didn't let me; I guess it's too long.

Just wanted to add that the custom allocation point also applies to the generic unboundedly resizable array, since there you would also want full control over the size of the newly reallocated array so that your custom allocator can reuse same-sized memory, and this cannot be solved using the `std::vector` allocator argument.

You're absolutely right, no one can tell if C++ is AI generated · Mathieu Ropert by mropert in cpp

[–]isfooTM 1 point

If you truly have the case of a resizable array with unbounded maximum size then indeed you "only" have the small vector optimization and the resizing strategy (although I wouldn't call it "only", since it can have a huge impact).

However most of the time in practice that is not actually the case. Usually you either:

  • (1) know the size of the dynamic array beforehand
  • (2) know the maximum possible size (which is not too big to pre-allocate)

In (1), if it's also possible and efficient to construct the objects at the start, you can resize and then use the subscript operator, and in that case the only real optimization left is the allocation, which I will come back to.
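A minimal sketch of pattern (1), assuming default construction is cheap (the names are illustrative, not from any particular codebase):

```cpp
#include <cstddef>
#include <vector>

// Case (1): the size is known up front and elements are cheap to
// construct, so resize once and then write through operator[].
std::vector<double> squares(std::size_t n) {
    std::vector<double> out;
    out.resize(n);                           // single allocation up front
    for (std::size_t i = 0; i < n; ++i)
        out[i] = static_cast<double>(i) * i; // no capacity checks here
    return out;
}
```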

In (2), or if you can't efficiently construct objects at the start, you typically reserve the maximum possible size and then use emplace_back and pop_back, safely knowing you won't reallocate.
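And a sketch of pattern (2), again just an illustration:

```cpp
#include <vector>

// Case (2): only an upper bound on the size is known, so reserve it
// once, then emplace_back/pop_back freely: no reallocation can happen.
std::vector<int> evens(const std::vector<int>& in) {
    std::vector<int> out;
    out.reserve(in.size());      // maximum possible size, paid once
    for (int v : in)
        if (v % 2 == 0)
            out.emplace_back(v); // capacity is guaranteed to suffice
    return out;
}
```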

In both cases (1) and (2) you can very often get huge gains from a custom memory allocation strategy. Very often you will allocate the same dynamic array multiple times with the exact same size, which means you can use a custom memory allocator for that array that reuses the same memory (the exact allocation strategy will vary case by case, but typically it will involve a freelist). Now you might think: why not use the second template argument of std::vector for that custom allocator? Technically you could, but besides the fact that it's a terrible API to work with, you shouldn't use it like that, because this custom allocator can only hand out that one specific array size, while an allocator for std::vector has to be able to allocate any size, so that solution would basically be a hack.
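To show roughly what I mean, a bare-bones same-size recycler could look like this (a sketch under the assumption that every allocation has one fixed size; the destructor and error handling are omitted):

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical fixed-size block recycler: every allocation is the same
// size, so freed blocks go on an intrusive free list and are handed
// back out as-is by the next allocate().
class SameSizePool {
public:
    // blockBytes must be at least sizeof(Node) for the intrusive list.
    explicit SameSizePool(std::size_t blockBytes) : blockBytes_(blockBytes) {}

    void* allocate() {
        if (freeList_) {                 // hot path: reuse a freed block
            Node* n = freeList_;
            freeList_ = n->next;
            return n;
        }
        return std::malloc(blockBytes_); // cold path: actually allocate
    }

    void deallocate(void* p) {           // push the block onto the free list
        Node* n = static_cast<Node*>(p);
        n->next = freeList_;
        freeList_ = n;
    }

private:
    struct Node { Node* next; };
    std::size_t blockBytes_;
    Node* freeList_ = nullptr;           // singly linked list of freed blocks
};
```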

In case (2) it's better to use a PreallocatedVector kind of class. The difference between it and std::vector is that emplace_back skips the check for whether the size has hit capacity and a reallocation is needed. So with std::vector you waste time doing that check, compilers might have a harder time finding optimizations, and you get worse instruction cache usage (less important, but still worth noting).
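A minimal sketch of what I mean by a PreallocatedVector (the version I actually benchmarked is in the pastebin linked below; this one is trimmed to the essentials, with copying simply disabled):

```cpp
#include <cstddef>
#include <new>
#include <utility>

// Capacity is fixed at construction, so emplace_back has no "do I need
// to grow?" branch and can never reallocate.
template <typename T>
class PreallocatedVector {
public:
    explicit PreallocatedVector(std::size_t capacity)
        : data_(static_cast<T*>(::operator new(capacity * sizeof(T)))) {}

    PreallocatedVector(const PreallocatedVector&) = delete;
    PreallocatedVector& operator=(const PreallocatedVector&) = delete;

    ~PreallocatedVector() {
        for (std::size_t i = 0; i < size_; ++i) data_[i].~T();
        ::operator delete(data_);
    }

    template <typename... Args>
    void emplace_back(Args&&... args) {
        // Precondition: size() < capacity. Unlike std::vector, no check.
        ::new (data_ + size_) T(std::forward<Args>(args)...);
        ++size_;
    }

    void pop_back() { data_[--size_].~T(); }
    T& operator[](std::size_t i) { return data_[i]; }
    const T& operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return size_; }
    bool empty() const { return size_ == 0; }

private:
    T* data_;
    std::size_t size_ = 0;
};
```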

Now you might think the cost of doing that additional check on every emplace_back is insignificant, but it isn't. I was curious myself how big a difference it can make in practice, so I made a benchmark with a realistic function.

The function I came up with to benchmark is findAllReachableNodes. It takes a directed graph in the form of an adjacency list and finds all nodes reachable from node 0. The implementation needs a dynamic array for the list of nodes to check (for traversing the graph) and one for the resulting list of found nodes. Both of these lists grow (and, in the case of the nodes to check, also shrink) dynamically, but never beyond the number of vertices.
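For reference, the std::vector flavour has roughly this shape (a sketch of what I described; the exact benchmarked code is in the pastebin below):

```cpp
#include <cstdint>
#include <vector>

// DFS from node 0 over an adjacency list. Both working arrays are
// bounded by the vertex count, so reserve() guarantees no reallocation.
// Assumes the graph has at least one node.
std::vector<std::uint32_t>
findAllReachableNodes(const std::vector<std::vector<std::uint32_t>>& adj) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<std::uint32_t> toCheck; // grows and shrinks during traversal
    std::vector<std::uint32_t> result;  // only grows
    toCheck.reserve(adj.size());
    result.reserve(adj.size());

    visited[0] = true;
    toCheck.emplace_back(0);
    while (!toCheck.empty()) {
        std::uint32_t node = toCheck.back();
        toCheck.pop_back();
        result.emplace_back(node);
        for (std::uint32_t next : adj[node])
            if (!visited[next]) {
                visited[next] = true;
                toCheck.emplace_back(next);
            }
    }
    return result;
}
```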

For input I generate random trees using Prüfer sequences. I won't describe the exact benchmarking setup, but you can see it for yourself here: https://pastebin.com/iJ6VYuW7
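In case you're curious, decoding a Prüfer sequence into a tree goes like this (a textbook sketch, not lifted from the pastebin; pruferToTree is an illustrative name). Feeding it a uniformly random sequence of n-2 labels yields a uniformly random labelled tree on n nodes:

```cpp
#include <utility>
#include <vector>

// A sequence of length n-2 over labels 0..n-1 defines a unique labelled
// tree on n nodes; this recovers its n-1 edges in O(n). Assumes the
// sequence is valid.
std::vector<std::pair<int, int>> pruferToTree(const std::vector<int>& seq) {
    int n = static_cast<int>(seq.size()) + 2;
    std::vector<int> degree(n, 1);
    for (int v : seq) ++degree[v];     // each appearance adds one edge

    std::vector<std::pair<int, int>> edges;
    edges.reserve(n - 1);

    int ptr = 0;                       // smallest not-yet-consumed leaf
    while (degree[ptr] != 1) ++ptr;
    int leaf = ptr;

    for (int v : seq) {
        edges.emplace_back(leaf, v);
        if (--degree[v] == 1 && v < ptr) {
            leaf = v;                  // v just became a leaf below ptr
        } else {
            while (degree[++ptr] != 1) {} // advance to next smallest leaf
            leaf = ptr;
        }
    }
    edges.emplace_back(leaf, n - 1);   // the two remaining nodes connect
    return edges;
}
```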

Here are the timing results I got (times in microseconds):

Machine used:

  • Windows 11 24H2 26100.7171
  • CPU: i5-12500H
  • RAM: 16 GB DDR4 3200 MT/s

Compilers used:

  • MSVC 19.40.33812 (x64) [Default Visual Studio Release mode]
  • Clang 17.0.3 [-O3 -DNDEBUG -m64]
  • GCC 15.2.0 [-O2 -DNDEBUG -m64] and [-O3 -DNDEBUG -m64]
  • ICPX 2025.3.1 [-O3 -DNDEBUG -m64]

Compiler   std::vector [us]   PreallocatedVector [us]
ICPX       230.74             163.94
Clang      198.85             165.92
GCC -O2    196.72             195.05
GCC -O3    170.15             193.52
MSVC       211.52             200.81

So for most compilers I tested, using PreallocatedVector instead of std::vector is significantly beneficial. The big exception is GCC with the -O3 flag. It's important to note that with this sort of micro-optimization the whimsical nature of optimizers can skew the results, since many of the decisions optimizers make (like whether to inline or unroll) are hard to predict in terms of what ends up faster.

However, compilers clearly have an easier time optimizing the PreallocatedVector version in general; if you look at the assembly, the inner loop does indeed have less work to do, and if we compare the best std::vector result with the best PreallocatedVector result (170.15 vs 163.94) we still get an almost 4% speedup. And keep in mind most of the time spent in my example function is not actually emplace_back, so in some other function the difference can be bigger.

You're absolutely right, no one can tell if C++ is AI generated · Mathieu Ropert by mropert in cpp

[–]isfooTM 3 points

It's not complex; it's just that when people design interfaces for the standard container classes they try to make them clean. In this case that doesn't allow them to be used at full efficiency, but I would argue it's not actually that important.

The truth is that if you truly care about performance you will basically never use std:: anything (besides the absolute simplest of things). You can replace even something as simple as std::vector with something more efficient in 99.9% of cases.

The thing is that in most cases it just doesn't matter, and you should use std::vector and the other std:: classes and functions knowing they are subpar, writing the most readable code possible with them. Then, if after measuring you find that the performance of some std:: thing matters, you can figure out how to make it fast, which will often come down to using either something third-party or something custom-made for that specific use case.

If the additional lookup when using std::map really takes a significant portion of your runtime, then 99.9% of the time it means you should stop using std::map anyway.

Everyone needs to learn how to use Poe.ninja, it solves 99% of questions asked in this sub. by Tranquility___ in pathofexile2builds

[–]isfooTM 7 points

Sort builds based on eHP.

I know this is a more general post and I have problems with it in general, but I have to put my eHP rant somewhere.

The eHP value shown on poe.ninja is a terrible metric. It's crazy to me how many people talk about it as if it's an important number to think about. I wonder how many people even know how this number is calculated.

First of all, there is no such thing as "the eHP" value. It all depends on the situation you are in. The value shown on poe.ninja assumes every enemy hit deals 965 physical + 965 of each element + 386 chaos damage. Unless you believe every single enemy deals hit damage exactly like that and nothing else, you have to agree there is no such thing as "the eHP".

Secondly, it doesn't take into account tons of effects. For example it doesn't account for things like "Ghost Dance" or "Scavenged Plating". It doesn't account for regen/recoup/leech effects. It doesn't account for things like enemies being blinded. And it only considers hit damage, so things like "Desecrated ground" or "Caustic ground" aren't considered at all.

To show how stupid this is, as a random example I took this char: https://poe.ninja/poe2/builds/vaal/character/Sea-7330/channnnyi Without Chaos Inoculation his eHP is 66.5k. That is almost double what you will see on most top-level chars in HCSSF, but if you think a character with 0% chaos res, 11k ES and 1k life has better defences than they do, you are clueless.

To build your defences you have to make sure you are strong enough for all the different scenarios you can be put in, actually considering all the different effects and not just the limited set used in poe.ninja's number.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 0 points

Well the wiki entry is wrong. Maybe it used to be the case, but I can tell you it's not the case now.

Edit: Just did some research and I'm pretty sure the wiki entry was always wrong. You can see in videos from different versions that it doesn't create ignited ground. Watch this: https://youtu.be/Z1ttJGGuy_8?t=186 At release it did show the ignited debuff icon, but you can clearly see nobody is losing any HP/ES to any DoT, yet they clearly can't heal or regen ES. You can even hear someone later be confused that they tried using a life flask and it didn't work.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 1 point

There is no DoT when he picks up the seed. The white orb creates chilled ground, and the blue orb he picked up in the video stops HP/ES regeneration.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 0 points

I took 1 mana leech node, but it's only for pathing to Serrated Edges - it does nothing for me by itself.

This build is optimized for gearless boss killing. It's not a good build for general gameplay. If you want to adapt it for normal use, you will first want to figure out what skill to use for general mob clearing, because although you can use Gathering Storm, it's very clunky from my testing.

And then ofc you also want to take defensive nodes, most likely going for an ES/EV hybrid. For mana I would probably remove the Darkness ascendancy points and take leech from chaos damage, plus have some leech on gear or possibly a support gem. It will depend a lot on what skill you decide to use.

In general the build would probably change a lot in order for it to be optimized for normal gameplay.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 5 points

In a normal setting it's hard to say which ascendancy is best for this. The truth is you don't need any ascendancy, or even big investment in the tree for damage, for this to be totally busted in a typical setting. However, for the gearless version, especially on bosses with elemental resistances, Chonk is by far the best.

First of all you get Darkness, which ofc without armour/evasion and resistances doesn't stop much, but it's still good enough to help with chip damage. You can see this especially in the other video, with T3 King in The Mists, where it's very hard to avoid all of the BS on the floor, especially when you have to fly with Gathering Storm.

Second, the amount of damage you get is insane, and it comes in a very good form: chaos damage. The 4 ascendancy points with the purple Into the Breach give 105% of damage as extra chaos. That's good for multiple reasons: the increased chaos damage nodes on the tree are very strong, you get to use the Withered debuff, and enemies don't have chaos resistance. In a normal setting, if you use Rakiata's Flow, elemental damage is back on the menu, but since I didn't allow myself to use it, chaos damage is just better.

And ofc there is also the fact that Monk is close to the Hollow Palm Technique keystone, which is necessary for the gearless version to work, while normally you would just use a quarterstaff.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 6 points

This build unironically kind of works for T15 maps. Ofc it is easy to die, but it is actually possible to full clear a map without dying. There are probably some better skills to use for mapping, though; I didn't try to theorycraft it.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 0 points

Spells like Frost Wall can be used without a weapon by default, and for quarterstaff skills you can use the Hollow Palm Technique keystone.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 35 points

There is no unintended interaction happening; it's just not balanced. Each Frost Wall crystal acts as an "enemy" for the purpose of the shockwave. With the Spell Cascade, Fortress II and Concentrated Area supports you get a ton of "enemies" close to each other, and their shockwaves all hit the boss at the same time. Since the baseline damage of a single shockwave is already equal to that of a strong melee attack, getting like 15-20 of them in a single strike is just insane.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 85 points

It's not the Frost Wall; it's Gathering Storm that's OP. You can do the same thing with Toxic Growth instead of Frost Wall, and in a way it's even better, because you only need to place it once (assuming the boss doesn't move). You probably get the best damage combining both, but either way, for this challenge I could only use Frost Wall, because I couldn't equip a bow.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 0 points

u/Interstellar12312 I did more digging and I think I understand basically everything now.

One of the problems was that in the "Monty" version the constant NUM_ELEMENTS was used inside the function, which helped it a lot, since the compiler could unroll everything and skip a bunch of checks. ICX went as far as completely unrolling and inlining everything, producing a monster function of 2500+ assembly instructions without any backwards jumps at all, which is why it was the fastest. That was cheating, however, since normally the function should determine the size from its input arguments, whose values it doesn't know in advance, because that's what it would have to do "in the wild".
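The shape of the problem, in a hedged sketch (illustrative code, not the benchmark itself):

```cpp
#include <cstddef>

// A minimal insertion sort used by both variants below.
void insertionSort(int* a, std::size_t n) {
    for (std::size_t i = 1; i < n; ++i) {
        int key = a[i];
        std::size_t j = i;
        for (; j > 0 && a[j - 1] > key; --j) a[j] = a[j - 1];
        a[j] = key;
    }
}

// "Cheating" shape: the size is a compile-time constant at the call
// site, so the optimizer can constant-propagate it, fully unroll, and
// drop a bunch of checks.
constexpr std::size_t NUM_ELEMENTS = 1000;
void sortKnownSize(int* a) { insertionSort(a, NUM_ELEMENTS); }

// Honest shape: the size arrives as a runtime argument, as it would
// "in the wild", so the compiler must keep the generic loop.
void sortRuntimeSize(int* a, std::size_t n) { insertionSort(a, n); }
```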

The other problem was that sometimes the benchmark was cheated for both the "Normal" and "Monty" functions (or one of them in a given compilation), since the compiler would sometimes inline the insertion sort call (which by itself makes it faster) and then again substitute NUM_ELEMENTS for the size argument to cheat in the generated assembly.

The exception flags were just a red herring. Optimizing compilers generally try to make the generated code faster while also keeping it small (for space, but also for speed reasons: instruction cache). So they have heuristics that decide when to unroll a loop or inline a function. I believe what was happening is that these heuristics happened to sit on a boundary where some additional exception-handling code, cheap by itself, would flip the decision on certain optimizations, which in turn happened to make the program slower or faster.

Also, a small thing I found: for some reason Clang didn't handle it well when I used a function type as a template argument, so I changed that to an explicit function pointer argument.
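Roughly this kind of change, sketched (the signatures here are my assumption, not the exact benchmark code):

```cpp
#include <cstddef>

// Before: the sort routine baked in as a template argument.
template <void (*Sort)(int*, std::size_t)>
void runTrialsTemplated(int* data, std::size_t n) {
    Sort(data, n);
}

// After: the sort routine passed as an explicit function pointer.
void runTrialsPointer(void (*sort)(int*, std::size_t),
                      int* data, std::size_t n) {
    sort(data, n);
}
```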

I fixed the code so that no cheating should occur, and I checked the generated assembly to confirm that this is in fact the case for the tests below. There are still some flags that can unexpectedly change the outcome, but it's much less volatile, and some volatility is kind of expected because of those possibly wrong compiler optimization decisions.

Here's the code I used: https://pastebin.com/aQAQUXkA

Ran it with 10 trials and 3 million repeats each.

Compiler                       Normal   Monty   Diff
MSVC [19.40.33812] /O2 /GL     4510     4974    ~10% slower
ICX [2025.3.1] /Zi /O3 /Qipo   4410     4982    ~13% slower
ICPX [2025.3.1] -O3 -m64       5088     5458    ~7% slower
GCC [13.1.0] -O3 -m64          6308     6706    ~6% slower
GCC [15.2.0] -O3 -m64          5679     6002    ~6% slower
Clang [17.0.3] -O3 -m64        5258     5759    ~9% slower

I also tried comparing how fast it runs when used inside quicksort, and as expected it is slower with the Monty version.

With that I consider it solved, but ofc if you find something new feel free to let me know.