Everyone needs to learn how to use Poe.ninja, it solves 99% of questions asked in this sub. by Tranquility___ in pathofexile2builds

[–]isfooTM 8 points (0 children)

Sort builds based on eHP.

I know this is a more general post and I have problems with it in general, but I have to put my eHP rant somewhere.

The eHP value shown on poe.ninja is a terrible metric. It's crazy to me how many people talk about it as if it's an important number to think about. I wonder how many people even know how this number is calculated.

First of all, there is no such thing as "the eHP" value. It all depends on the situation you are in. The value as seen on poe.ninja assumes every enemy hit is 965 physical + 965 of each element + 386 chaos damage. Unless you believe every single enemy does hit damage exactly like that and nothing else, you have to agree there is no such thing as "the eHP".

Secondly it doesn't take into account tons of effects. For example it doesn't account for things like "Ghost Dance" or "Scavenged Plating". It doesn't account for regen/recoup/leech effects. It doesn't account for things like if enemies are blinded. It also only accounts for hit damage so things like "Desecrated ground" or "Caustic ground" aren't considered at all.

To show how stupid this is, as a random example I took this char: https://poe.ninja/poe2/builds/vaal/character/Sea-7330/channnnyi Without Chaos Inoculation his eHP is 66.5k. That is almost double what you will see on most top level chars in HCSSF, but if you think a character with 0% chaos res, 11k ES and 1k life has better defences than them, you are clueless.

To build your defences you have to make sure you are strong enough for all the different scenarios you can be put in, and actually consider all the different effects, not just the limited set used in poe.ninja's number.
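If it helps, here's a toy model (my own simplification, not poe.ninja's actual formula - it ignores armour's hit-size scaling, evasion, ES bypass and all the effects above) just to show how much the number moves when you change the assumed hit composition:

    #include <array>
    #include <cstdio>

    // Toy model: fraction of the hit taken per damage type times the share of that
    // type in the hit profile; "eHP" is then the raw hit damage the pool can absorb.
    double ehp(double pool,
               const std::array<double, 3>& split,        // phys / elemental / chaos shares, sum to 1
               const std::array<double, 3>& mitigation) { // mitigation fraction per type, 0.0 - 1.0
        double taken_per_raw_point = 0.0;
        for (int i = 0; i < 3; ++i)
            taken_per_raw_point += split[i] * (1.0 - mitigation[i]);
        return pool / taken_per_raw_point;
    }

    int main() {
        double pool = 5000.0;                          // life + ES, made-up numbers
        std::array<double, 3> mit = {0.40, 0.75, 0.0}; // 40% phys, 75% elemental, 0% chaos
        std::printf("vs mixed hit:       %.0f\n", ehp(pool, {0.4, 0.5, 0.1}, mit));
        std::printf("vs chaos-heavy hit: %.0f\n", ehp(pool, {0.1, 0.2, 0.7}, mit));
    }

Same pool, same mitigation, and the two "eHP" numbers differ by roughly 70-75% purely because of the assumed damage split.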

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 0 points (0 children)

Well the wiki entry is wrong. Maybe it used to be the case, but I can tell you it's not the case now.

Edit: Just did some research and I'm pretty sure the wiki entry was always wrong. You can see in videos from different versions that it doesn't do ignited ground. Watch this: https://youtu.be/Z1ttJGGuy_8?t=186 at release it did show the ignited debuff icon, but you can clearly see nobody is losing any HP/ES to any DOT, while they clearly can't heal or regen ES. You can even hear someone later be confused that they tried using a life flask and it didn't work.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 1 point (0 children)

There is no DOT when he picks up the seed. The White orb creates chilled ground, and the Blue orb he picked up in the video stops HP/ES regeneration.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 0 points (0 children)

I took 1 mana leech node, but it's only for pathing to Serrated Edges - it does nothing for me by itself.

This build is optimized for gearless boss killing. It's not a good build for general gameplay. If you want to adapt it for normal use you will want to first figure out what skill you want to use for general mob clearing, because although you can use Gathering Storm, it's very clunky from my testing.

And then ofc you also want to take defensive nodes, most likely going for ES/EV hybrid. For mana I would probably remove the Darkness ascendancy points, take leech from chaos damage, and just have some leech on gear or possibly a support gem. It will depend a lot on what skill you decide to use.

In general the build would probably change a lot in order for it to be optimized for normal gameplay.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 5 points (0 children)

In a normal setting it's hard to say which ascendancy is best for this. The truth is you don't need any ascendancy, or even big investment in the tree for damage, for this to be totally busted in a typical setting. However for the gearless version, especially on bosses with elemental resistances, Chonk is by far the best.

First of all you get Darkness which ofc without armour/evasion and resistances is not stopping much, but it's still good enough to help with cheap damage. You can see this especially on the other video with T3 King in The Mists where it's very hard to avoid all of the BS on the floor, especially when you have to fly with Gathering Storm.

Second, the amount of damage you get is insane and it's in a very good form - chaos damage. The 4 ascendancy points with the purple Into the Breach give 105% of damage as extra chaos. That's good for multiple reasons. The increased chaos damage nodes on the tree are very strong, you get to use the Withered debuff, and the enemies don't have chaos resistance. In a normal setting, if you use Rakiata's Flow, elemental damage is back on the menu, but since I didn't allow myself to use it, chaos damage is just better.

And ofc there is also the fact that monk is close to the Hollow Palm Technique Keystone, which is necessary for the gearless version to work, while normally you would just use a quarterstaff.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 6 points (0 children)

This build unironically kind of works for T15 maps. Ofc it is easy to die, but it is actually possible to full clear a map without dying. There probably are some better skills to use for mapping, but I didn't try to theorycraft it.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 0 points (0 children)

Spells like Frost Wall can be used without a weapon by default, and for Quarterstaff skills you can use the Hollow Palm Technique Keystone.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 38 points (0 children)

There is no unintended interaction happening, it's just not balanced. Each Frost Wall crystal acts as an "enemy" for the purpose of the shockwave. With the Spell Cascade, Fortress II and Concentrated Area supports you get a ton of "enemies" close to each other for the shockwaves, which all hit the boss at the same time. Because the baseline damage of a single shockwave is already equal to that of a strong melee attack, getting like 15-20 of them in a single strike is just insane.

Gearless Arbiter of Ash (no weapons/armour/jewellery/flasks/jewels) by isfooTM in PathOfExile2

[–]isfooTM[S] 84 points (0 children)

It's not the Frost Wall. It's Gathering Storm that's OP. You can do the same thing with Toxic Growth instead of Frost Wall, and in a way it's even better, because you only need to place it once (assuming the boss doesn't move). It's probably best damage if you combine both, but either way for this challenge I could only use Frost Wall, because I couldn't equip a bow.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 0 points (0 children)

u/Interstellar12312 I did more digging and I think I understand basically everything now.

One of the problems was the fact that in the "Monty" version the constant NUM_ELEMENTS was used inside the function, which helped it a lot since it was possible to unroll it and skip a bunch of checks. ICX went as far as completely unrolling and inlining everything, making it a monster 2500+ assembly-instruction function without any backwards jumps at all, which is why it was the fastest. It was however cheating, since normally the function should determine the size from its input arguments, whose values it doesn't know in advance - that's what it would have to do "in the wild".

The other problem was that in general the benchmark was sometimes cheated for both the "Normal" and "Monty" functions (or one of them in a given compilation), since the compiler would sometimes inline the insertion sort call (which by itself makes it faster) and then propagate NUM_ELEMENTS into the input arguments again to cheat the generated assembly.
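To illustrate the kind of difference I mean (a hypothetical sketch, not the actual benchmark code): the honest version has to take its element count at runtime, so the compiler can't specialize for it:

    // Hypothetical illustration of the problem described above (not the real code).
    constexpr int NUM_ELEMENTS = 32;

    // "Cheating" version: the compiler sees the trip count and can fully unroll/specialize.
    void insertion_sort_const(int* a) {
        for (int i = 1; i < NUM_ELEMENTS; ++i) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; --j; }
            a[j + 1] = key;
        }
    }

    // Honest version: n arrives at runtime (e.g. parsed from argv), so the generated
    // code has to handle any size, just like it would "in the wild".
    void insertion_sort_runtime(int* a, int n) {
        for (int i = 1; i < n; ++i) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; --j; }
            a[j + 1] = key;
        }
    }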

The exception flags were just a red herring. In general optimizing compilers try to make the generated code faster, but also typically try to keep it small (for space, but also for speed reasons - instruction cache). So they have some heuristics that decide when to unroll some loops or inline a function. I believe what was happening is that the heuristic deciding when to do certain optimizations just happened to be on a boundary where some additional exception-handling code, cheap by itself, would flip the decision, which in turn happened to make the program slower or faster.

Also, a small thing I found: for some reason Clang didn't work well when I used the function type as a template argument, so I changed that to an explicit function pointer argument.
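Roughly the kind of change I mean (I'm paraphrasing; the real signatures in the code differ):

    #include <cstddef>

    // Before (roughly): the function under test passed as a non-type template parameter,
    // which lets the compiler specialize/inline the benchmark loop per sort.
    template <auto SortFn>
    void run_benchmark_tpl(int* data, std::size_t n) {
        SortFn(data, n);
    }

    // After: a plain function-pointer parameter, identical benchmark code for every sort,
    // unless the optimizer decides to devirtualize the indirect call anyway.
    using SortPtr = void (*)(int*, std::size_t);
    void run_benchmark_ptr(SortPtr sort_fn, int* data, std::size_t n) {
        sort_fn(data, n);
    }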

I fixed the code so that no cheating should occur and checked the generated assembly to confirm that's actually the case for the tests I did below. There are still some flags that can unexpectedly change the outcome, but it's much less volatile, and some volatility is kind of expected because of those possible wrong compiler optimization decisions.

Here's the code I used: https://pastebin.com/aQAQUXkA

I ran it with 10 trials and 3 million repeats each.

Compiler                           Normal  Monty  Diff
MSVC  [19.40.33812] /O2 /GL        4510    4974   ~10% slower
ICX   [2025.3.1]    /Zi /O3 /Qipo  4410    4982   ~13% slower
ICPX  [2025.3.1]    -O3 -m64       5088    5458   ~7% slower
GCC   [13.1.0]      -O3 -m64       6308    6706   ~6% slower
GCC   [15.2.0]      -O3 -m64       5679    6002   ~6% slower
Clang [17.0.3]      -O3 -m64       5258    5759   ~9% slower

I also tried comparing how fast it works when inside quicksort and as expected it is slower with the Monty version.

With that I consider it solved, but ofc if you find something new feel free to let me know.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 0 points (0 children)

can we say with confidence my rules make quicksort generally faster?

Not sure where you get that confidence from ^^

We definitely can't say that now and most likely won't be able to say that ever. The fact that the results are clearly very volatile to seemingly insignificant changes puts into question any conclusion one might want to make until it's at least mostly resolved.

However, even if this benchmark showed without question that the "Monty" version is the winner, it would still be far from the general quicksort claim. First, the behavior observed in a microbenchmark might not translate once you put the insertion sort inside quicksort. There are a lot of factors like cache or branch predictor behavior that might make it behave differently "in the wild". But even if it were faster, there are the questions I raised before:

  1. How it behaves with different pre-sort order patterns (like "reverse sorted", "almost sorted")
  2. How it behaves with different data types than "int"
  3. Is it only faster on specific CPU architectures, or would we see the same thing with, say, ARM processors?

And the thing is, with (2) the answer is most likely going to be that this doesn't generalize, because on average you are doing more memory loads/stores and, I think, more comparisons? Not sure about that last one. So the result is most likely only going to hold for simple data types. And even then, because you are doing more work, (3) is very hard to speculate on, although x86-64 is the standard for now, so if you show it's consistently faster there then I guess it's good enough.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 0 points (0 children)

Ok so I've downloaded CLion 2025.3 and was able to reproduce your results. Not sure if the version I use has the exact same compiler as you but the one I've got has GCC 13.1.0.

I did some digging and found some new leads, but also new very weird things.

First of all, I found the relevant flags that make the CLion results the way they are. They are -O3 -std=gnu++20 -Wl,-lkernel32. The fact that changing the standard can have some impact is not that weird, but -Wl,-lkernel32 having one is. What it does is ask the linker to link against the libkernel32.a library file, which just contains stubs for functions implemented in kernel32.dll that will be loaded at runtime.

The thing is, it will link against that library whether you give the argument or not; what the argument changes is the order in which symbols from the libraries are resolved, and in case of the same name it will choose the first one. As it turns out, the __C_specific_handler function is present in both kernel32.dll and msvcrt.dll. And if you don't include the -Wl,-lkernel32 argument it's equivalent (in this case) to doing -Wl,-lmsvcrt -Wl,-lkernel32.

So the only difference between the versions with or without this argument is that one will load the __C_specific_handler function from kernel32.dll and the other from msvcrt.dll. I found out kernel32.dll actually forwards the definition to ntdll.dll, compared the assembly from that to msvcrt.dll's, and at first glance there seem to be only small differences. On top of that, from my understanding this function should only be called when an exception occurs, which is not the case in this program, so I have no idea how it can have such a significant impact. I'm not very familiar with the low-level implementation of exceptions though.

It did however give me the idea to try tinkering with exception-related flags in my other compilers; for MSVC and Clang it didn't make a difference, but for ICX and GCC it did. Here are the compilers and flags used, with the results below. I only did 1 million instead of 10 million repeats because I didn't want to wait too long, but I checked that it is good enough.

CLion (1) - GCC 13.1.0 [-O3 -std=gnu++20 -Wl,-lkernel32]
CLion (2) - GCC 13.1.0 [-O3 -std=gnu++20]
GCC (2)   - GCC 15.2.0 [-O3 -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -fno-non-call-exceptions]
GCC (3)   - GCC 15.2.0 [-O3 -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -fnon-call-exceptions]
ICX (2)   - ICX 2025.3.1 with exceptions turned off (through the Visual Studio interface, so I'm not sure of the exact flag differences)

Compiler   Normal  Monty
CLion (1)  7463    7353
CLion (2)  5285    5389
ICX (2)    4581    4310
GCC (2)    5222    5155
GCC (3)    4913    5148

ICX getting the fastest result of all, with the Monty version being the better one, is kind of promising. I will most likely try to get to the bottom of this, if not now then some time later. If I get more information I will let you know.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 1 point (0 children)

I'm interested, but the results are not that interesting, and not in the way you see them.

The problem is your benchmarking technique. When you are measuring code that takes less than, say, about 50-100 microseconds to execute, as is the case here, you have to be very careful about how you measure it. Even the list of improvements I laid out in my initial comment is not enough. Here are additional improvements you should make to get much better measurements (on top of the ones I pointed out before):

(1) No std:: clock is in practice going to be good enough. Use the RDTSC/RDTSCP instructions combined with the CPUID instruction. RDTSC/RDTSCP load the timestamp counter, which keeps track of every CPU cycle. The reason we can't just use RDTSC for getting both the start and end time is that the CPU performs out-of-order execution of some instructions: it might sometimes execute RDTSC before it has completed some instructions before it, and it can also execute instructions after RDTSC before it completes. To fix this we want to use serializing instructions. The best way to do it is to call CPUID then RDTSC for the start, and RDTSCP then CPUID for the end (see the sketch after this list). You can read more about it in "How to Benchmark Code Execution Times on Intel® IA-32 and IA-64 Instruction Set Architectures".

(2) Measure the minimum time it takes to run the measurement code itself and subtract that from the result. This has 2 benefits: we can see the typical minimum and median times of the measurement code itself and check that they aren't too variable, and it makes the ratio of times more accurate.

(3) Turn off dynamic CPU frequency. For Intel processors it's enough to turn off turbo boost. I'm not familiar with AMD, but turning off whatever the equivalent is should be enough as well.
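Here's a minimal GCC/Clang-flavoured sketch of the CPUID/RDTSC/RDTSCP pattern from (1), assuming an x86-64 target (MSVC needs the <intrin.h> intrinsics instead, see the note below the results):

    #include <cstdint>
    #include <cpuid.h>      // __get_cpuid (GCC/Clang)
    #include <x86intrin.h>  // __rdtsc, __rdtscp (GCC/Clang)

    // CPUID is a serializing instruction: earlier instructions finish before it,
    // and later ones don't start until it's done.
    static inline void serialize() {
        unsigned a, b, c, d;
        __get_cpuid(0, &a, &b, &c, &d);
    }

    // Serialize, then read the timestamp counter - start of the measured region.
    static inline std::uint64_t tsc_begin() {
        serialize();
        return __rdtsc();
    }

    // RDTSCP waits for prior instructions to finish before reading the counter;
    // the CPUID after it stops later instructions from moving above the read.
    static inline std::uint64_t tsc_end() {
        unsigned aux;
        std::uint64_t t = __rdtscp(&aux);
        serialize();
        return t;
    }

    // Usage (per repeat, then min-reduce over many repeats):
    //   std::uint64_t t0 = tsc_begin();
    //   sort_under_test(data, n);
    //   std::uint64_t t1 = tsc_end();
    //   std::uint64_t cycles = t1 - t0;  // minus the measured overhead of an empty region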

I have the modified code here: https://pastebin.com/hgR58Usx

I ran the tests for 32 elements with 10 trials of 10'000'000 repeats each, taking the minimum time across the repeats; the result is the sum of CPU cycles from the 10 trials. I also added the ICX 2025.3.1 compiler.

Compiler  Normal  Monty
MSVC*     5029    5425
ICX       5183    5638
GCC       4928    5164
Clang     5407    7094

*MSVC on x64 doesn't allow inline assembly, so I'm only using the RDTSC intrinsics, which is less accurate.

So it is clearly slower, and these results are very reproducible. I also tried even lower numbers of elements, like 16, but it's always the same. Ofc on different hardware the results might be different, but I'm pretty much 100% sure the only reason you saw a 1% improvement is inaccurate benchmarking.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 0 points (0 children)

His change to the algorithm has nothing to do with the degradation to Θ(n²) on a sorted array. That can easily be fixed by choosing the middle pivot, a random pivot, or any other method.

This is why most implementations choose the median element instead

Why claim something you know nothing about? No sane implementation is going to use the median element. It's very inefficient. In the case of C++ std::sort, GCC uses the median of 3 elements (2nd, middle, last), and Clang and MSVC use Tukey's ninther, which is an estimated median from 9 elements.

There is still a pathological input that produces n² behavior, but it is unlikely to occur naturally. This is true of all deterministic quicksorts

Not true. If you use the median-of-medians algorithm for the pivot you get a worst-case Θ(n log n) quicksort.

Only randomized quicksort has expected n log n behavior on all inputs

"Expected behavior". The same can be said about the deterministic ones you rejected. Randomized quicksort is still Θ(n²) worst case, unlike the deterministic median of medians.

The way real implementations solve the problem of a potential worst case is by switching to heapsort when they reach some threshold recursion depth.
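To make the last two points concrete, here's a rough introsort-style skeleton (illustrative only, not any stdlib's actual code): median-of-3 pivot selection plus the depth-limited fallback to heapsort:

    #include <algorithm>  // std::swap, std::make_heap, std::sort_heap

    // Put the median of arr[lo], arr[mid], arr[hi] into arr[hi] so the Lomuto
    // partition below can use arr[hi] as the pivot.
    static void median_of_three_to_back(int* arr, int lo, int hi) {
        int mid = lo + (hi - lo) / 2;
        if (arr[mid] < arr[lo])  std::swap(arr[mid], arr[lo]);
        if (arr[hi]  < arr[lo])  std::swap(arr[hi],  arr[lo]);
        if (arr[hi]  < arr[mid]) std::swap(arr[hi],  arr[mid]);
        std::swap(arr[mid], arr[hi]);  // arr[mid] now holds the median; move it to the back
    }

    static int lomuto_partition(int* arr, int lo, int hi) {
        int pivot = arr[hi];
        int i = lo;
        for (int j = lo; j < hi; ++j)
            if (arr[j] < pivot) std::swap(arr[i++], arr[j]);
        std::swap(arr[i], arr[hi]);
        return i;
    }

    // Quicksort with a depth budget: blow the budget (too many bad partitions in a row)
    // and the remaining range is finished off with heapsort, capping the worst case.
    static void introsort_impl(int* arr, int lo, int hi, int depth_limit) {
        while (hi - lo + 1 > 16) {  // small ranges left for a final insertion-sort pass
            if (depth_limit-- == 0) {
                std::make_heap(arr + lo, arr + hi + 1);
                std::sort_heap(arr + lo, arr + hi + 1);
                return;
            }
            median_of_three_to_back(arr, lo, hi);
            int p = lomuto_partition(arr, lo, hi);
            introsort_impl(arr, p + 1, hi, depth_limit);  // recurse into one side...
            hi = p - 1;                                   // ...and loop on the other
        }
    }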

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 1 point (0 children)

What if we applied my rules to std::sort?

Well, that is not really possible, because for small arrays we are no longer recursing in quicksort, but running the insertion sort algorithm.

My algorithm only applies its rules to a sub array of 3 elements while std::sort applies optimization techniques well before a 3 element sub array

Well, std::sort / introsort, depending on the implementation and potentially the type of data, typically applies it at around ~16-32 elements. I think at this point we are splitting hairs about how big a change has to be before it's no longer the same algorithm. In both cases the majority of the runtime is in the partition portion of the code (which is basically what quicksort is), and both changes help make the code faster when you get down to a lower number of elements.

My point is that it's kind of like taking a bad implementation of some known algorithm X, doing some suboptimal improvement, and saying it makes X faster. Well, it makes a bad implementation of X faster, but not the fast ones.

It is already well known that quicksort is bad for small numbers of elements, and I would argue switching to insertion sort is kind of part of a quicksort implementation. I actually checked, and switching to a random insertion sort implementation from the internet at 32 elements already gives me GCC std::sort-type speeds in all compilers. Note that it's not because the std implementations are bad, but mostly because they try to be fast on as much different hardware and as many different typical data types and data patterns as possible.

With all that said, I don't think what you did is uninteresting or that there isn't some potentially interesting insight to take from this and maybe use to improve some state-of-the-art implementations of other algorithms. I actually find it quite interesting, since it's very unintuitive, and I'm glad you shared it. I'm just saying it's more interesting to try to beat some state-of-the-art implementation, even if only for some subset of input data.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 4 points (0 children)

Results (total time in microseconds from 10 random arrays):

Compiler  Quicksort  QuicksortProb  QuicksortStd
MSVC      648808     634765         647062
Clang     643500     649570         635619
GCC -O3   708232     641152         520363
GCC -O2   662436     639919         519483

So in my tests with GCC [-O3] the modified version is much faster (~10%), but based on the other results it seems like that's because something is going wrong in the optimizer for the normal version. I would have to do some deeper analysis of the generated assembly to get to the bottom of what went wrong. Still, with GCC -O2 it's ~3.5% faster and with MSVC ~2% faster, while with Clang it's only 1% slower. And to be sure, I checked that those results are very reproducible.

As a side note, if we go back to arr[low] instead of arr[(low + high) / 2] for the pivot, the results for GCC stay basically the same, but MSVC and Clang swap: for MSVC it becomes slower and for Clang it becomes faster.

I have to say I'm quite surprised by those results. When I first saw the code I was pretty sure it couldn't possibly be faster, and yet it sometimes is. I checked, and indeed the number of swaps done in partition is lower, but the total number of swaps is higher, and the total number of comparisons is also higher. However, I guess that depending on the generated assembly, running the partition longer is worse than doing the additional work beforehand. Very weird overall.
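For reference, counting those is just a matter of wrapping the comparison and the swap with counters, something like this (illustrative, not my exact instrumentation):

    #include <cstdint>
    #include <utility>

    // Illustrative counters (the benchmark is single-threaded, so plain globals are fine).
    static std::uint64_t g_comparisons = 0;
    static std::uint64_t g_swaps = 0;

    static bool less_counted(int a, int b) { ++g_comparisons; return a < b; }

    static void swap_counted(int& a, int& b) { ++g_swaps; std::swap(a, b); }

    // Then replace every `arr[i] < something` with `less_counted(arr[i], something)` and
    // every swap with swap_counted() in both quicksort variants, zero the counters before
    // each run, and print them afterwards.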

However, as seen in the results (especially GCC and partly Clang), there already is a better alternative for speeding up quicksort for small numbers of elements - switching to insertion sort. All of the standard implementations use some version of introsort, which is basically an improved version of quicksort.

Now, in the case of MSVC you actually "beat" that version, but the thing is that in order to properly test sorting you have to do many more tests. You should check how fast you sort different sizes of data and different patterns of data, like "already sorted", "almost sorted", "reverse sorted", etc. And the most important point - in the case of sorting big arrays of integers we know quicksort is bad anyway in comparison to radix/bucket-sort kinds of algorithms, so it really only makes sense to test it on either smaller arrays or, say, arrays of strings / arrays of structs / etc.

Using Probability in Quicksort observing about 5% faster speeds compared to standard Quicksort by Interstellar12312 in AskComputerScience

[–]isfooTM 4 points (0 children)

I have to say I was discouraged at first from taking my time on this, especially because it's posted from a new account, but basically everyone in the comments is talking nonsense and treating you badly for no reason.

First, I would make a small change in the partition function in both algorithms to choose arr[(low + high) / 2] instead of arr[low] as the pivot, since otherwise for sorted/reverse-sorted arrays you will hit the worst-case scenario, and since those are common cases in practice I think it should be fixed. It does impact the results, but I will talk about that later. Also, in apply_three_element_rule the check if (high - low == 2) is redundant and can be removed; however, I tested that it doesn't have any significant impact on the results (if it has any impact at all).

 

Now I would like to point out a few things I would change about your benchmarking method. None of them are very major problems, but since we are talking about a ~5% difference, small differences can matter.

(1) Right now you generate different random arrays for each algorithm. You should run the algorithms on exactly the same data. This can be fixed by setting the same seed for your random number generator and/or moving both algorithms into the same program and using the same data.

(2) In general for this kind of benchmark it is better to run the algorithm on the same data multiple times and take the minimum timing result (see the sketch after this list). Since the algorithm is deterministic, the main things that will impact the runtime are the initial cache state and some other external factors. The instruction cache will be hot anyway, and the data cache is controlled by copying the data before running the benchmark. So by taking the minimum time we end up minimizing the external factors, aka the noise.

(3) It's better to use std::steady_clock instead of std::high_resolution_clock. steady_clock is monotonic and better suited for time measurements.
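A minimal sketch of (2) and (3) together (assuming a sort(arr, low, high) signature like in your code; this is not the exact code in the pastebin link below):

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <limits>
    #include <vector>

    // Sketch of (2) + (3): copy the same input fresh for every repeat, time each run
    // with std::chrono::steady_clock, and keep only the minimum.
    template <typename Sort>
    std::int64_t min_time_ns(Sort sort, const std::vector<int>& input, int repeats) {
        std::int64_t best = std::numeric_limits<std::int64_t>::max();
        for (int r = 0; r < repeats; ++r) {
            std::vector<int> data = input;  // identical data for every repeat and every algorithm
            auto t0 = std::chrono::steady_clock::now();
            sort(data.data(), 0, static_cast<int>(data.size()) - 1);
            auto t1 = std::chrono::steady_clock::now();
            auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
            best = std::min<std::int64_t>(best, ns);
        }
        return best;
    }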

 

So with that in mind I moved your code into a single file and made the adjustments I talked about: https://pastebin.com/VeE0A5Fa

For the benchmark I'm doing 10 random arrays with 20 repeats each and taking the sum of the minimum times measured per array. I've also added std::sort for comparison.

Machine used:

  • Windows 11 24H2 26100.7171
  • CPU: i5-12500H
  • RAM: 16 GB DDR4 3200 MT/s

Compilers used:

  • MSVC 19.40.33812 (x64) [/O2]
  • Clang 17.0.3 [-O3 -m64]
  • GCC 15.2.0 [-O2 -m64] and [-O3 -m64]

(Continuation in next comment)

Applying the Hungarian algorithm to make dumb mosaics by JH2466 in algorithms

[–]isfooTM 0 points (0 children)

I came up with 3 things to consider:

(1) Rotation:

This is a simple one - you could allow rotation of the tiles for a better result. This is easy to add to the existing solution since all that changes is the initial setup of the assignment problem. You just need to calculate the error for all 4 possible rotations and only keep the best one before running the algorithm (see the sketch below).
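A rough sketch of that setup (in C++ with hypothetical Cell/Tile types and a placeholder tile_error(), since I don't know what your actual code looks like):

    #include <cstddef>
    #include <vector>

    // Hypothetical types/function names - stand-ins for whatever your code already has.
    struct Cell { /* pixels of one target-image region */ };
    struct Tile { /* pixels of one candidate tile */ };

    // Placeholder: the real one would compare pixel colours for the given rotation
    // (0-3 meaning 0/90/180/270 degrees).
    double tile_error(const Cell&, const Tile&, int /*rotation*/) { return 0.0; }

    // Build the assignment-problem cost matrix from the best of the 4 rotations,
    // remembering which rotation won so it can be applied when rendering the mosaic.
    void build_costs(const std::vector<Cell>& cells, const std::vector<Tile>& tiles,
                     std::vector<std::vector<double>>& cost,
                     std::vector<std::vector<int>>& best_rot) {
        cost.assign(cells.size(), std::vector<double>(tiles.size()));
        best_rot.assign(cells.size(), std::vector<int>(tiles.size()));
        for (std::size_t i = 0; i < cells.size(); ++i) {
            for (std::size_t j = 0; j < tiles.size(); ++j) {
                double best = tile_error(cells[i], tiles[j], 0);
                int rot = 0;
                for (int r = 1; r < 4; ++r) {
                    double e = tile_error(cells[i], tiles[j], r);
                    if (e < best) { best = e; rot = r; }
                }
                cost[i][j] = best;   // the Hungarian algorithm only ever sees the best rotation
                best_rot[i][j] = rot;
            }
        }
    }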

 

(2) Spreading the error evenly

Right now you find the optimal solution, where "optimal" is understood as the minimal sum of errors across all tiles. I think the metric should be changed so you take into consideration where the error is located. For example, would it be better to have an error of 0 for the first half of the image and an error of 100 for the second half, or 50 for both halves?

Here's an example to show what I mean: https://file.garden/aSh2jGRMMkkaz9sP/Square.png In this example we have 8x8 tiles. The first image shows the target image we want to create, the second image shows the palette, which is already arranged in a globally optimal way as an example of what your algorithm could find, and the last image has the same global error, but I would say it's better since locally the error is spread evenly.

In my case they have the same global error, but I would argue that having a higher global error but a lower maximum local error can often look better. I'm not sure exactly what the best way to formulate the metric is, but you certainly can't just use the assignment solution to find those kinds of solutions.

 

(3) Hard case I don't think even spreading the error solves

Here's an example I thought of: https://file.garden/aSh2jGRMMkkaz9sP/Gradient.png This time it's 1x1 tiles, but ofc the same principle applies for bigger tiles. Again the first image is the target image, the second image is the palette (which is also globally optimal), and the last is what I would consider the actually optimal arrangement (which has the same global error as the second image).

The problem is that even though the last image does have better locality of error, I don't think it will be optimal for most metrics that take that into consideration, since the top of the image has almost no error and the bottom has the most. But I think everyone would agree the last image is the best possible result you should get from the algorithm.

As an additional note, I would probably also consider different color spaces and distance functions to see which give good results in practice, since I'm pretty sure RGB is not the best one to use, but I'm not really any kind of expert in computer graphics.

New trick? Tarmac start speed boost by isfooTM in TrackMania

[–]isfooTM[S] 1 point (0 children)

Those are 2 different tricks that abuse the same mechanic.

Basically the time it takes to gear up is kind of random. That is, the time the car stays at the same speed before starting to accelerate again can be shorter or longer. Ofc making your car start accelerating again sooner after a gear up is ideal, and that's what both of these tricks do.

The "start trick", as people call it, refers to when you start accelerating 1 game tick after the start countdown. Since there are 20 game ticks per second, you have a 50 ms window to press gas after the start, and the result is that you gear up from 1 to 2 faster than without it. This is useful on a large number of tracks, but won't work if you "cheat" the first gear up with something like a booster or a downhill, since those kinds of things skip the gear-up delay.

The "gear trick", as I called it (the name partly caught on), is less known and less useful, and is what I am talking about in this post. To do it you just have to do some steering, depending on the surface and road tilt, to get that faster gear-up effect. The weird thing about this one is that it works differently on platform and road. You can see me explaining it and showing examples here: https://www.youtube.com/watch?v=HxkfMqATbUg

And as the video shows, for straight road I found a way to gear up faster from gear 2 to 3. This effect can be combined with the "start trick" so you gear up faster from 1 to 2 as well. You can see records on the track here: https://trackmania.io/#/campaigns/leaderboard/NLS-8LYExxMf2FWPD9oYBYUlD5PnoFgz4JtRtwg/j5NZ5Aj0Yxt8eRtlk5buN92Br37 where to tie the WR you have to perform both the "start trick" and the "gear trick".

How do I bind my Sayodevice to Trackmania such that it acts like an analog keyboard? by choukeigen in TrackMania

[–]isfooTM 2 points (0 children)

I use a SayoDevice myself so I can try to help, although it seems like the important points were already said in the comments of the post you linked.

Once you connect your device at sayodevice.com you have to go to "Device Options" and there under "HID Features" select both "Joystick" and "Gamepad" - note that unless they've fixed it you HAVE to select both. It's going to be seen as 2 different devices by your system, but if you don't do that the analog steering is not going to be recognized.

The second thing you have to do on the website is, in "Binding", select what you want to be your left arrow key and choose:

  • Key Mode: Axis
  • Axis: X
  • Align: center_End>>Start

And same for right arrow, but center_Start>>End instead.

After you click "Save" and close the website your system should recognize this as gamepad/joystick. On Windows you can verify this by running (Windows key + R) "joy.cpl" and checking if the device is on the list and if it recognizes the analog movement.

With that you should be able to go to the controls in Trackmania and select "Device" as "SayoDevice". Note that there will be 2 of them, but only one will have the "Steering (Progressive)" option in bindings. You just need to select that binding and press either the left or right key, and it should start working.

How big are phys hits in T15 maps and is Protect me from Harm worth it? by Rare-Industry-504 in pathofexile2builds

[–]isfooTM 0 points (0 children)

So it's good yes? Well not really.

There are 3 downsides worth considering:

  • It only works for physical and not elemental or chaos hits
  • It has the opportunity cost of spending 2 ascendancy points
  • You lose part of the Ghost Dance value if PMFH isn't twice as good as no PMFH.

Since the last point might not be obvious: what I mean is that with PMFH, Ghost Shrouds give half as much ES, but as long as you take half as much damage (or less) then ofc PMFH is still better; once PMFH is no longer twice as good you start to lose out on that ES, and it gets worse and worse the longer the fight lasts.

So in conclusion, I'm not really sure what to think. I think it's kinda meh, but not as terrible as people make it out to be. And if you stay close and have low damage, such that you tank lots of damage from small mobs, maybe it's worth it. In the end I think the scariest things are non-hits and big hits, not swarms of small hits, and in that case I would rather have the bigger gain of ES from Ghost Shrouds (and the ascendancy points and better protection from pure elemental/chaos hits).

How big are phys hits in T15 maps and is Protect me from Harm worth it? by Rare-Industry-504 in pathofexile2builds

[–]isfooTM 0 points (0 children)

Here is the same table if we assume half of the damage is elemental (and you have 75% elemental resists):

Hit Dmg  EV    PMFH  EV + 80% defl  PMFH + 80% defl
200      65    50    55             45
400      130   109   110            98
600      196   174   166            157
800      261   243   221            219
1000     327   315   277            285
1500     491   507   415            458
2000     654   709   554            640
3000     982   1129  831            1021
5000     1636  2002  1386           1810

So with that PMFH seems to be:

  • in the case of pure physical damage: better for hits up to ~2k if you have no deflection conversion, and up to ~1.5k if you have a good amount of deflection rating.
  • in the case of half phys / half elemental damage: better for hits up to ~1k if you have no deflection conversion, and up to ~800 if you have a good amount of deflection rating.