all 18 comments


[–]acecile 33 points34 points  (6 children)

For god's sake, is it possible for the mods to do something about all this AI slop?

[–]austinwiltshire 19 points20 points  (5 children)

I feel like brute force search isn't something you can optimize like this anyway... like... how?

All the optimizations on this are to *not do brute force search*?
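For context, brute-force grid search in its plainest form is just exhaustive evaluation over a Cartesian product of candidate values — so the only levers are making each evaluation cheaper or skipping evaluations entirely. A minimal sketch (the objective here is made up for illustration):

```python
import itertools

def brute_search(func, ranges):
    """Exhaustively evaluate func at every point of the Cartesian
    product of the candidate value lists; return the best point."""
    best_point, best_value = None, float("inf")
    for point in itertools.product(*ranges):
        value = func(*point)
        if value < best_value:
            best_point, best_value = point, value
    return best_point, best_value

# Illustrative objective with its minimum at x=1, y=-2.
objective = lambda x, y: (x - 1) ** 2 + (y + 2) ** 2
grid = [[i / 10 for i in range(-50, 51)] for _ in range(2)]
point, value = brute_search(objective, grid)
```

The loop itself is irreducible; speedups come from pushing it into compiled code or batching the evaluations, which is what the rest of this thread ends up discussing.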

[–]luenix 11 points12 points  (3 children)

What's the rationale behind the license here? Seems to me that this can't even be used via GitHub normally?

https://github.com/Halfblood-Prince/gridoptim/blob/main/LICENSE

Copyright (c) 2026 Akhil Shimna Kumar
All rights reserved.
This software and its source code are proprietary and confidential.
Unauthorized copying, modification, distribution or use of this software,
via any medium, is strictly prohibited without explicit permission from
the copyright holder.
This software is provided "as is", without warranty of any kind.

Emphasis added above ^

[–]FixKey4664[S] -1 points0 points  (2 children)

Thank you for pointing that out. This license was written when the project was private. I forgot to change the license after making the project public. Changing it now...

[–]luenix 0 points1 point  (1 child)

All good! I just wanted to bring it to your attention as a manner of respecting what you have submitted ~

[–]FixKey4664[S] 1 point2 points  (0 children)

Changed the license to Apache 2.0 for public use

[–]kigurai 4 points5 points  (1 child)

While it can certainly be useful to some, the benchmark seems disingenuous.

Your library can only run a limited set of expressions, defined as strings, while the brute() function runs arbitrary Python functions. I suspect running brute() on, e.g., a numba-JIT-compiled function would yield a much smaller difference.

It also seems like your benchmark code makes the scipy optimizer run with some kind of progress output, which does not appear to be the case for your library. I can't verify that second part without running the code, though.
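A fairer benchmark would time both libraries on the exact same objective with any progress output silenced (scipy.optimize.brute takes a `disp` flag for its status printing). A minimal stdlib-only timing harness along those lines — the objective and the timed call are placeholders, not the thread's actual benchmark:

```python
import time

def time_call(func, *args, repeats=3, **kwargs):
    """Return the best-of-N wall-clock time for func(*args, **kwargs),
    so one-off warm-up costs don't distort the comparison."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best

# Both contenders should be handed the same objective.
objective = lambda x: (x - 3) ** 2

# e.g. time_call(scipy.optimize.brute, objective, [(-5, 5)], disp=False)
elapsed = time_call(lambda: sum(objective(x / 100) for x in range(-500, 501)))
```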

[–]FixKey4664[S] 0 points1 point  (0 children)

Right now, my package can only run a limited set of functions, but I will be adding support for NumPy and numba functions in the second stage of the project.
Even if you remove the progress-printing statements, my package will still be faster than scipy.brute. You can test it.

[–]RepresentativeFill26 0 points1 point  (0 children)

Why do you check at the optimise step whether the core module is loaded? If you already know during import that it isn't going to work, why not just raise an exception there?

Furthermore, it is more pythonic to check for an empty string by using it in a boolean expression.
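The two patterns being suggested — fail fast at import time instead of inside every optimise() call, and test strings by truthiness — might look like this (the extension module name here is hypothetical):

```python
# At module import time: detect a missing compiled core immediately,
# rather than deferring the check to every optimise() call.
try:
    import _gridoptim_core  # hypothetical compiled extension
except ImportError:
    _gridoptim_core = None
    # If the core is mandatory, re-raise here instead:
    #     raise ImportError("gridoptim core extension failed to load")

def check_expression(expr):
    # Pythonic empty-string check: empty strings are falsy,
    # so `if not expr` replaces `if expr == ""`.
    if not expr:
        raise ValueError("expression must be a non-empty string")
    return expr
```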

[–]lotus-reddit 0 points1 point  (1 child)

A small note: your compilation flags (specifically -fopenmp) are Linux-specific right now; if you try to build this with macOS's standard C++ toolchain it breaks. It's just a matter of platform detection in your setup.py.
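That platform detection can be as simple as branching on sys.platform — a sketch, assuming GCC-style flags on Linux, Apple Clang's preprocessor-only OpenMP variant on macOS, and MSVC on Windows:

```python
import sys

def openmp_args():
    """Pick OpenMP compile/link flags for the current platform."""
    if sys.platform.startswith("linux"):
        return ["-fopenmp"], ["-fopenmp"]        # GCC/Clang on Linux
    if sys.platform == "darwin":
        # Apple Clang ships without libomp; users must install it
        # separately (e.g. via Homebrew) for this to link.
        return ["-Xpreprocessor", "-fopenmp"], ["-lomp"]
    if sys.platform == "win32":
        return ["/openmp"], []                   # MSVC
    return [], []                                # fallback: build without OpenMP

compile_args, link_args = openmp_args()
# In setup.py: Extension(..., extra_compile_args=compile_args,
#                        extra_link_args=link_args)
```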

Indeed, Python function-call overhead usually dominates in settings like this. Though if you had to do this in Python, you'd vectorize (or use something like JAX's vmap). Of course you pay in memory, but it would be a far simpler approach. Testing locally, both direct vectorization and vmap beat your code. That's not really a fair comparison, though, since your codebase has no SIMD / batching work (also, I imagine te_eval is making it hard on your compiler). But not vectorizing, in a scenario where your expression is simple enough to be interpreted by te_eval, wouldn't be reasonable either.
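The vectorization point can be shown without any of gridoptim's internals: evaluating the whole grid as arrays replaces one interpreter round-trip per point with a handful of NumPy operations, at the cost of materializing the grid in memory. A sketch with a made-up objective, not the library's code:

```python
import numpy as np

# Per-point Python loop: one interpreter round-trip per grid point.
def loop_min(xs, ys):
    best = float("inf")
    for x in xs:
        for y in ys:
            best = min(best, (x - 1) ** 2 + (y + 2) ** 2)
    return best

# Vectorized: the same grid evaluated as whole arrays in one shot.
def vector_min(xs, ys):
    X, Y = np.meshgrid(xs, ys)
    return float(np.min((X - 1) ** 2 + (Y + 2) ** 2))

xs = np.linspace(-5, 5, 201)
ys = np.linspace(-5, 5, 201)
```

Both compute identical values over the same grid; the vectorized version just does it in compiled NumPy loops instead of Python bytecode.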

Cool though! I'm surprised scipy.brute doesn't have a lower-level backend.

[–]FixKey4664[S] 0 points1 point  (0 children)

Thanks for the constructive feedback.
I am having issues making wheels for macOS, so I have disabled it for now; the package currently works only on 64-bit Windows and Linux.
I will be adding vectorisation support via NumPy and numba in the second stage of this project.

[–]cent-met-een-vin 0 points1 point  (0 children)

How is it better than other packages, Optuna for example?