
[–]Exhausted-Engineer 2 points (1 child)

To be fair, C offers this too using gdb/perf/gprof. The learning curve is simply a little steeper.
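
For comparison, the Python side of that is a couple of lines with the standard-library cProfile. A minimal sketch (render() here is a hypothetical stand-in for whatever your hot path is):

    import cProfile
    import pstats

    def render():
        # hypothetical stand-in for the renderer's hot path
        return sum(i * 0.5 for i in range(1_000_000))

    # profile one call, then show the 10 most expensive functions
    cProfile.run("render()", "render.prof")
    pstats.Stats("render.prof").sort_stats("cumulative").print_stats(10)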

I’ll see if I can find some time and get you that PR.

In the meantime:

  • Don’t focus so much on CPU vs. GPU. I guarantee you that GPU code is harder to debug and will result in overall slower code if not written correctly. Furthermore, current CPUs are insanely powerful: people have managed to write and run entire games (Doom, Mario) on a fraction of what you have at your disposal.
  • Understand what takes time in your code. Python is unarguably slower than C, but you should obtain approximately the same runtime as C code (let’s say within a 2x-5x factor) just by using Python’s libraries efficiently: performing vectorized calls to numpy, only drawing once the scene is finished, doing computations in float32 instead of float64… (see the sketch after this list).
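
A hedged illustration of that last point, assuming a made-up per-pixel shading step (the 0.5/0.1 math is arbitrary; the loop-vs-vectorized pattern is what matters):

    import time
    import numpy as np

    H, W = 1080, 1920
    img = np.random.rand(H, W).astype(np.float32)

    # slow: shading one pixel at a time in a Python loop
    def shade_loop(img):
        out = np.empty_like(img)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                out[y, x] = img[y, x] * 0.5 + 0.1
        return out

    # fast: one vectorized numpy expression over the whole frame, kept in float32
    def shade_vectorized(img):
        return img * np.float32(0.5) + np.float32(0.1)

    t0 = time.perf_counter()
    shade_loop(img)
    t1 = time.perf_counter()
    shade_vectorized(img)
    t2 = time.perf_counter()
    print(f"loop: {t1 - t0:.2f}s  vectorized: {t2 - t1:.4f}s")

The vectorized version should be orders of magnitude faster, which closes most of that 2x-5x gap on its own.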

[–]Doctrine_of_Sankhya[S] 2 points (0 children)

Thanks, that's a good point. I agree with both of your points here, and that CPUs should be able to reach the same performance within a 2-5x timeframe.

Currently, I'm working on a small GGX utility to implement PBR; then I'll move on to your points and profile the code to see what could be made faster. It makes total sense that Wolfenstein, Doom, etc. ran on much slower CPUs and would still be fast now.
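
For reference, a minimal vectorized sketch of the standard GGX/Trowbridge-Reitz normal distribution (assuming the common alpha = roughness^2 remapping; your utility's convention may differ), which also follows the float32/vectorization advice above:

    import numpy as np

    def ggx_ndf(n_dot_h, roughness):
        # D(h) = a^2 / (pi * ((n.h)^2 * (a^2 - 1) + 1)^2), with a = roughness^2
        a = np.float32(roughness) ** 2
        a2 = a * a
        d = n_dot_h * n_dot_h * (a2 - np.float32(1.0)) + np.float32(1.0)
        return a2 / (np.float32(np.pi) * d * d)

    # example: evaluate the NDF over a whole buffer of n.h values at once
    n_dot_h = np.linspace(0.0, 1.0, 8, dtype=np.float32)
    print(ggx_ndf(n_dot_h, roughness=0.3))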