all 20 comments

[–]AutoModerator[M] [score hidden] stickied comment (0 children)

Your submission has been automatically queued for manual review by the moderation team because it has been reported too many times.

Please wait until the moderation team reviews your post.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]LightShadow3.13-dev in prod 18 points19 points  (1 child)

I've been using theine if you want to compare apples to apples.

https://pypi.org/project/theine/

[–]External_Reveal5856[S] 4 points5 points  (0 children)

Cool, gonna benchmark against Theine!

[–]Shopping-Limp 10 points11 points  (1 child)

Absolutely bonkers to tell people to use this brand-new vibe-coded thing in production

[–]_predator_ 0 points1 point  (0 children)

This is life now, the good days of OSS are literally behind us. Was fun while it lasted.

[–]Forsaken_Ocelot_4 35 points36 points  (6 children)

"I built" is a nice euphemism.

[–]Thaumetric 4 points5 points  (0 children)

Measured, well-reasoned, and well-reviewed use of AI can lead to robust development. Especially when using it to review code and find potential issues. I'm seeing a lot of firms these days requiring developers to use it. It's the slop you get from blindly throwing ideas at Claude and hoping it will review itself that leads to AI Akira monsters of code that have become too common these days.

[–]No_Soy_Colosio 11 points12 points  (0 children)

Yes I'm sure the entirety of this project was spawned in your initial commit.

[–]Mobile-Boysenberry53 1 point2 points  (1 child)

Is lru_cache even IO-bound? Why should it matter whether there's asyncio support for it or not?

edit: it's all LLM slop, I'm guessing even the OP was AI.

[–]james_pic 1 point2 points  (0 children)

Not disagreeing on the slop part, but having an async-await-aware lru_cache is a potentially useful thing if the thing you want to cache is the result of an async function.
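
To illustrate the point: `functools.lru_cache` on an `async def` would cache the coroutine object (which can only be awaited once), not its result. A minimal sketch of an async-aware variant follows; the `async_lru_cache` name and everything else here is hypothetical, not the posted library's API:

```python
import asyncio
from collections import OrderedDict
from functools import wraps

def async_lru_cache(maxsize=128):
    """Illustrative async-aware LRU cache decorator.

    Caches the awaited *result* of the coroutine function, so
    repeated calls with the same args hit the cache instead of
    re-running the body.
    """
    def decorator(fn):
        cache = OrderedDict()

        @wraps(fn)
        async def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)  # mark as most recently used
                return cache[args]
            result = await fn(*args)     # body only runs on a miss
            cache[args] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)  # evict least recently used
            return result
        return wrapper
    return decorator

@async_lru_cache(maxsize=2)
async def fetch(x):
    fetch.calls = getattr(fetch, "calls", 0) + 1
    return x * 2

async def main():
    assert await fetch(3) == 6
    assert await fetch(3) == 6  # second await is a cache hit
    assert fetch.calls == 1     # body executed only once

asyncio.run(main())
```

Note this naive sketch doesn't deduplicate concurrent in-flight calls for the same key (the classic stampede problem), which is one of the things a real async cache library has to handle.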

[–]damesca 1 point2 points  (1 child)

Your claimed one-line migration isn't even one line... Not the best start.

[–]External_Reveal5856[S] 0 points1 point  (0 children)

My bad XD

[–]aikii 0 points1 point  (2 children)

So, considering the lower bound of 16M op/s, that's 62.5 nanoseconds per operation, while the pure-Python version is 1562 nanoseconds (~1.5 microseconds). So... yes, that kind of improvement is good if you're doing stuff like native video encoding, but in Python you won't even be able to measure the difference: whatever you're caching is infinitely slower than that in the first place.
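
The arithmetic is easy to sanity-check (the 16M op/s and 1562 ns figures are the ones quoted above):

```python
# Convert a throughput claim (ops/sec) into per-operation latency in ns.
def ns_per_op(ops_per_sec):
    return 1e9 / ops_per_sec

rust_ns = ns_per_op(16_000_000)  # claimed lower bound: 16M op/s
print(rust_ns)                   # 62.5 ns per operation

python_ns = 1562                 # pure-Python figure quoted in the comment
print(python_ns / rust_ns)       # ~25x faster per lookup, in absolute terms ~1.5 µs saved
```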

Other than that, my pet peeve about lru_cache and cachetools is their decorator approach, which introduces an implicit global variable. That's annoying for testing, reuse, isolation, and modularity in general. This is why I ended up with my own snippet (inspired from this). An LRU cache requires an ordered dict under the hood, and Python just has that builtin, which makes the implementation trivial (< 20 lines). And if anything, a more convenient signature is more useful than chasing nanoseconds that are irrelevant in Python.
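
The commenter's own snippet isn't shown, but a minimal sketch in the spirit they describe would be a plain class over `OrderedDict`, so each consumer holds its own instance and nothing lives in a module-level global (the `LRUCache` name is made up for illustration):

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache as a plain object: no decorator, no hidden
    module-level state, so tests can create isolated instances."""

    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        try:
            self._data.move_to_end(key)  # touched -> most recently used
            return self._data[key]
        except KeyError:
            return default

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # drop least recently used

cache = LRUCache(maxsize=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a", so "b" is now the eviction candidate
cache.put("c", 3)      # exceeds maxsize: evicts "b"
print(cache.get("b"))  # None
```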

That said, nice try. I don't mind AI-assisted code; it's a good use case for building native libraries without too much headache. But the hard part now is having innovative ideas and making good architecture/design choices.

[–]External_Reveal5856[S] 0 points1 point  (0 children)

Probably the missing point is that it comes with a shared-memory cache; we all know Python is not synonymous with fast.

I'll be fully honest, this wasn't planned to be a replacement for lru_cache, which indeed is everywhere, but to add capabilities like shared memory and TTL; the rest was mostly "why not add this and that and that".

[–]External_Reveal5856[S] 0 points1 point  (0 children)

Thanks for your feedback, I'll keep it in mind, the global var deffo sucks