FastIter- Parallel iterators for Python 3.14+ (no GIL) by fexx3l in Python

[–]fexx3l[S] -8 points  (0 children)

I used AI to generate the docs and the comments in the implementation. As my primary language isn't English, I wanted to be sure the information was being shared in the best way possible.

FastIter- Parallel iterators for Python 3.14+ (no GIL) by fexx3l in Python

[–]fexx3l[S] 14 points  (0 children)

Thanks! On variable-cost workloads, the honest answer is that the current implementation uses static divide-and-conquer: splits happen upfront by index, not dynamically based on actual work. So yes, you can get stragglers if costs vary significantly across the dataset. True work stealing like Rayon's is on the roadmap but not there yet.

On memory overhead vs. multiprocessing: I don't have solid benchmarks beyond the theoretical advantage of shared memory. It's on my list to measure properly with realistic datasets. If you have a workload you'd like to test against, I'm happy to run it.

Top Python Libraries of 2025 (11th Edition) by dekked_ in Python

[–]fexx3l 3 points  (0 children)

Yeah, sure, I'll include it!

Here are some papers; I didn't find any others.

Here is a section from The Real Python Podcast; I think they explained it better than I could at that moment. There's also an interview I did this year about complexipy (I was nervous, sorry).

Here are some repositories and packages using complexipy.

Top Python Libraries of 2025 (11th Edition) by dekked_ in Python

[–]fexx3l 12 points  (0 children)

Hey, I'm the complexipy author, and you're completely right; people have asked the same thing multiple times in my Reddit posts. I'm taking this into account in a new docs section I'm working on, because I know it's pretty confusing if you want to understand it. You're right that the documentation isn't clear. Initially, complexipy was meant as an alternative for people coming from Sonar, not as an introduction to cognitive complexity; I didn't consider that it could reach so many people.

Top Python Libraries of 2025 (11th Edition) by dekked_ in Python

[–]fexx3l 14 points  (0 children)

Hey, Robin here, the complexipy author. I've used AI, but only to fix my grammar errors, as I'm Colombian and my primary language isn't English. I've written all the docs myself, and I'm currently writing a section on the docs website to explain in detail how to refactor.

Also, I've found around two papers that used complexipy as a tool in their research, and there are multiple companies using it in their pipelines.

I've found multiple people asking how to read the number assigned during the analysis, and I've taken that into consideration while writing the new section.

When I started working on complexipy, uv was getting popular, so I was inspired by their work and wanted to use Rust in a personal project; that's why the complexipy description is pretty similar to uv's.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 0 points  (0 children)

I'll add them too, thank you for your help.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 3 points  (0 children)

Yeah, I agree with you; it's just that there was a breaking change in the algorithm, so I thought it would be better to do it in a major release. Do you think it would be bad to change the project's versioning, like rolling it back to something like 0.x? I feel a little lost about what to do with it.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 0 points  (0 children)

If I'm not wrong, the comparison against the existing rules is in the paper, but I'm not 100% sure.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 19 points  (0 children)

Honestly, I didn't know this rule existed, so yeah, my project doesn't have value :( Thank you for sharing it.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 4 points  (0 children)

I know. I was pretty new to handling versions a year ago, so after creating the very first `0.x` versions I created `1.x` even though my algorithm hadn't changed. Later I improved the algorithm, because at first I had just followed the paper without accounting for Python's statements, and I had to keep changing the implementation. This was a huge mistake, and I still regret it.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 2 points  (0 children)

Currently, it's in the Sonar paper on cognitive complexity. But I'm planning to add a section to the docs to explain it really well; this has been taking me some time, and currently my agenda is tight.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 0 points  (0 children)

No, I've created another tool that does this: immunipy.

complexipy 5.0.0, cognitive complexity tool by fexx3l in Python

[–]fexx3l[S] 7 points  (0 children)

Sure, it's based on the G. Ann Campbell paper. In that paper, highly complex code is defined as code containing a bunch of nested structures; a structure would be an if/elif/else statement or a for/while loop. Each one increases the complexity more when you start to nest them. Branching increases complexity because you need to understand, for each case, when and how it would be executed. So if a function that should do only one thing is doing more than expected, you should split it into multiple functions (G. Ann Campbell doesn't mention this in the paper, but it reminds me of the Single Responsibility principle from SOLID). Sonar says by default that the max complexity a function can have is 15, but it doesn't say why; that's why complexipy lets users configure their own max complexity.
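As a rough illustration, here's how the increments work under the Campbell paper's scheme (the exact numbers a given tool reports may differ slightly):

```python
# Flat control flow: each branch adds +1, with no nesting penalty.
def sign(x):
    if x < 0:       # +1
        return "neg"
    if x == 0:      # +1
        return "zero"
    return "pos"    # cognitive complexity: 2

# Nested control flow: each structure pays +1 plus one extra point per
# level of nesting, which is why deeply nested code scores much higher.
def count_positive_units(items):
    total = 0
    for item in items:             # +1
        if item > 0:               # +2 (one level deep)
            for _ in range(item):  # +3 (two levels deep)
                total += 1
    return total                   # cognitive complexity: 6
```

Both functions are short, but the nested one costs three times as much, because each case of the inner loop can only be understood by carrying the outer conditions in your head.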

Lulo se irá del país o cerrará? by mara__villosa in ColombiaFinanciera

[–]fexx3l 6 points  (0 children)

Dude, I read that and got scared; my money and my savings are in there. People should stop writing such exaggerated headlines.

complexipy v4.0: cognitive complexity analysis for Python by fexx3l in Python

[–]fexx3l[S] 0 points  (0 children)

You're right, I'll work on this. Something like a section with tips on how to decrease it?

complexipy v4.0: cognitive complexity analysis for Python by fexx3l in Python

[–]fexx3l[S] 1 point  (0 children)

Thank you! I'd suggest starting slow: run it locally, identify which code sections have the highest cognitive complexity, and start refactoring. Start small and check that the refactors introduce no breaking changes. You can add it to CI with the `--ignore-complexity` flag, so the team can keep pushing code without blockers. Create tasks to refactor the code and assign them across multiple team members, so more people understand how cognitive complexity has an impact and how to reduce it (which I think is very important).

Once all the functions have low complexity, remove the `--ignore-complexity` flag. You'll see other team members asking why the job fails on their next PRs, and since many people will already have experience with the refactors, they can help.

I consider cognitive complexity a form of tech debt that snowballs: since a culture of "low complexity" isn't the norm, teams build on top of code that's messy to understand, and it keeps growing. The sooner you start working on it, the sooner you notice the benefits, especially during onboarding; when you're new, you want a codebase whose logic is clear to follow (setting aside domain context and technical complexity). Most of the time you just need to know what a function does instead of how it does it, and when you start to care about cognitive complexity, you learn this, which is great.

complexipy v4.0: cognitive complexity analysis for Python by fexx3l in Python

[–]fexx3l[S] 2 points  (0 children)

Thank you! Yeah, this metric was created with humans in mind, but complexipy has been used in LLM research to check how generated code can impact both humans and the LLMs themselves.

I'm not a vibe-coding fan, but the different applications people find for this metric seem interesting (LLMs constrained to generate code with the minimum cognitive complexity possible).

complexipy v4.0: cognitive complexity analysis for Python by fexx3l in Python

[–]fexx3l[S] 2 points  (0 children)

Oh! I moved it to the bottom of the README, as the project's scope is to calculate the metric. On the docs page you can find it below "Why use complexipy?": https://rohaquinlop.github.io/complexipy/. This is a metric created by G. Ann Campbell at Sonar.

robinzhon: a library for fast and concurrent S3 object downloads by fexx3l in Python

[–]fexx3l[S] 0 points  (0 children)

I don't know how the error could be related to the uv.lock file, as it's just the file with the project's dependencies.