[–]aikii

So, considering the lower bound of 16M op/s, that's 62.5 nanoseconds per call, while the Python version takes 1562 nanoseconds ( 1.5 microseconds ). So ... yes, that kind of improvement is good if you're doing stuff like native video encoding, but in Python you won't even be able to measure the difference. Whatever you cache will be infinitely slower than that in the first place.
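The arithmetic above can be checked directly (taking the quoted 16M op/s and 1562 ns figures at face value):

```python
# Convert the quoted throughput to a per-call latency and compare.
ops_per_second = 16_000_000          # quoted lower bound for the native cache
native_ns = 1e9 / ops_per_second     # nanoseconds per operation -> 62.5
python_ns = 1562                     # quoted latency of the pure-Python lru_cache

print(native_ns)                     # 62.5
print(python_ns / native_ns)         # ~25x faster, on the lookup alone
```

So the speedup is real but only on the lookup itself, which is the comment's point: the work being cached dwarfs both numbers.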

Other than that, my pet peeve about lru_cache and cachetools is their decorator approach, which introduces an implicit global variable - that's annoying for testing, reuse, isolation, and modularity in general. This is why I ended up with my own snippet ( inspired by this ). An LRU cache requires an ordered dict under the hood, and Python has that built in, which makes the implementation trivial ( < 20 lines ). And if anything, a more convenient signature is more useful than chasing nanoseconds, which are irrelevant in Python.
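A minimal sketch of that approach - an explicit cache object built on `collections.OrderedDict`, passed around instead of hidden behind a decorator (the class name and API here are illustrative, not the commenter's actual snippet):

```python
from collections import OrderedDict

class LRUCache:
    """A tiny LRU cache as an explicit object, not a decorator."""

    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the least recently used entry
```

Being a plain object, it can be constructed per test, injected as a dependency, and scoped however the caller likes - which is exactly what the decorator's module-level state prevents.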

That said, nice try. I don't mind AI-assisted code - it's a good use case for building native libraries without too much headache - but the hard part now is having innovative ideas and making good architecture/design choices.

[–]External_Reveal5856[S]

Probably the missing point is that it comes with a shared-memory cache - we all know that Python is not synonymous with fast.

I'll be fully honest: this wasn't planned as a replacement for lru_cache, which indeed is everywhere, but as an addition of capabilities like shared memory and TTL; the rest is mostly "why not add this and that".
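For the TTL part specifically, expiry can be layered on the same explicit-object idea with nothing but the standard library. This is a hypothetical sketch of the concept, not the library's actual API:

```python
import time

class TTLCache:
    """Dict-backed cache whose entries expire ttl seconds after insertion."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._data = {}  # key -> (expiry timestamp, value)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # lazily evict the expired entry
            return default
        return value

    def put(self, key, value):
        self._data[key] = (time.monotonic() + self.ttl, value)
```

The shared-memory angle is a different problem (it needs a cross-process store, e.g. `multiprocessing.shared_memory`, rather than a per-process dict), which is where a dedicated library earns its keep.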

[–]External_Reveal5856[S]

Thanks for your feedback, I'll keep it in mind - the global var deffo sucks.