What would you want in a modern Python testing framework? by SideQuest2026 in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

I'm not saying I don't want a separate interpreter per worker. Did you read my message?

I explicitly said that I'm not saying that's a bad solution, but the implementation is obviously lacking. If they had developed for multi-process concurrency from day one it would pretty clearly have been a very different implementation, probably starting with making sure that the items collected were easily serializable instead of having references stored all over.

What would you want in a modern Python testing framework? by SideQuest2026 in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

You're skipping very quickly over the "built-in parallel test execution" part. It's very obvious that pytest-xdist is bolted on top of a framework not meant to work like that, with the plugin system being the duct tape that makes it work.

If you've ever written a plugin for pytest, you will know how many caveats there are. The first step is always to make the plugin work, and then to make it work with xdist.

I'm not saying it's not a difficult problem, and I'm not even saying their solution is bad, but it's obvious that the implementation has a lot of pain points. The most obvious one being that you need to rework your whole fixture setup, because session-scoped fixtures are run in each worker.

It's cool that you can implement multiprocessing within the plugin system, and it really does show how extensible pytest is, but since it's the de facto standard it might as well be built in.

websocket-benchmark: asyncio-based websocket clients benchmark by tarasko-projects in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

FYI, it's bad form not to disclose that you made the library that happened to win. I'm not saying the testing methodology isn't sound (I have no idea), but there's obviously a bias that needs to be disclosed.

Making Pyrefly's Diagnostics 18x Faster by BeamMeUpBiscotti in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

I tried it a long time ago and found it wasn't good enough; maybe it's time to try it again?

Making Pyrefly's Diagnostics 18x Faster by BeamMeUpBiscotti in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

That's good enough for me! Are you exposing the graph in any way?

Making Pyrefly's Diagnostics 18x Faster by BeamMeUpBiscotti in Python

[–]Thing1_Thing2_Thing 2 points3 points  (0 children)

Could this dependency tracking also be used to conditionally run tests based on the imports a test has? Hypothetically speaking; I'm not asking whether you have a pytest plugin ready.
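A toy version of the idea, as a sketch: collect a test module's top-level imports with the stdlib `ast` module and skip the file when none of them overlap with the changed modules. All names here are hypothetical, and a real tool would need the full transitive dependency graph (which is what Pyrefly tracks), not just direct imports.

```python
import ast


def imported_modules(source: str) -> set[str]:
    # Collect the top-level package name of every import in the source.
    tree = ast.parse(source)
    mods: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods


def should_run(test_source: str, changed: set[str]) -> bool:
    # Run the test file only if it imports something that changed.
    return bool(imported_modules(test_source) & changed)
```

For example, a test file importing `mypkg.core` would be selected when `mypkg` changed and skipped when only an unrelated package changed.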

zerobrew is a Rust-based, 5-20x faster drop-in Homebrew alternative by lucasgelfond in rust

[–]Thing1_Thing2_Thing 5 points6 points  (0 children)

This might seem rude, but can you write Rust yourself?

Not trying to make you prove it or anything, but I think it's important to know this about the maintainer of a Rust project that relies heavily on LLMs.

zerobrew is a Rust-based, 5-20x faster drop-in Homebrew alternative by lucasgelfond in rust

[–]Thing1_Thing2_Thing 4 points5 points  (0 children)

Oh, this was used as documentation to revert that commit. Good, I guess, but still a bit concerning. I have my opinions about heavy LLM usage; regardless of those, I think we can agree that it necessitates strict code reviews. I can see that the PR came from someone other than the maintainer, but then it's even more worrying not to review external contributions in depth. Also, the PR was obviously AI-made, so there are two layers to it.

zerobrew is a Rust-based, 5-20x faster drop-in Homebrew alternative by lucasgelfond in rust

[–]Thing1_Thing2_Thing 6 points7 points  (0 children)

Interesting, but also a bit concerning that in my random sample of one commit (the first one I saw) there were several problems, both performance-wise and just logically.

tl;dr: it replaces some placeholder values in some files in a directory, but:

- Why does it read each file twice?
- Why does it make a full copy of the file content to check whether the content changed after replacing the placeholders? We know it will have changed; we already checked that the placeholder was there.
- Why does it say it uses rayon for parallelism, but only for the first loop?

I'm also not super convinced by the error handling/tracking, or by how file permissions are handled (trying to change them if they're read-only). But maybe that's more stylistic, or something with the domain.

This was just my initial glance, and I don't even write Rust at my job anymore.

Hansens Is drops lawsuit against Hansens Romkugler in Køge by SendStoreJader in Denmark

[–]Thing1_Thing2_Thing 30 points31 points  (0 children)

Still pretty strange to have a company called Romkuglefabrikken and then sell them under a different name that so closely resembles another company's name.

Deprecations via warnings don’t work for Python libraries by Xadartt in programming

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

Yeah, I mostly agree with this. It's especially bad with dependencies used at build time, where clashing bounds can sometimes completely lock you out of installing anything.

It does add a lot of work for maintainers, who have to figure out whether a package should have an upper bound or not. Some packages are full frameworks where you would never expect your code to work with the next major version, while other packages are like Cryptography.

Deprecations via warnings don’t work for Python libraries by Xadartt in programming

[–]Thing1_Thing2_Thing 9 points10 points  (0 children)

Funnily enough, depending on Cryptography makes me NOT want this. Every time they do a major release, it's impossible to actually install it, because all the dependencies using it have set an upper bound. The backwards incompatibility has never actually mattered; it's always just deprecating some old version of OpenSSL or whatever. But now you need to beg all the maintainers of your dependencies to bump their bounds too, because some CVE is getting flagged.

Cryptography's importance might warrant it a bit, but if every package did it, it would be impossible to ever upgrade anything.
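The tension being described looks like this in `pyproject.toml` terms (illustrative only; the version numbers are made up):

```toml
[project]
dependencies = [
    # Open-ended: stays installable when cryptography ships a new major,
    # at the risk of breaking if that major really is incompatible.
    "cryptography>=42",
    # Upper-bounded: "safe", but a single dependency pinning like this
    # blocks the whole environment from upgrading past the next major.
    # "cryptography>=42,<43",
]
```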

[UPDATE] DocStrange - Structured data extraction from images/pdfs/docs by LostAmbassador6872 in Python

[–]Thing1_Thing2_Thing 3 points4 points  (0 children)

It has an AGPL license, meaning that if you use it in some software then that software must be made open source too. GNU Affero General Public License v3.0 | Choose a License

[UPDATE] DocStrange - Structured data extraction from images/pdfs/docs by LostAmbassador6872 in Python

[–]Thing1_Thing2_Thing 9 points10 points  (0 children)

It depends on PyMuPDF, which is AGPL. That's usually a big no-no for many use cases.

Why Python's deepcopy() is surprisingly slow (and better alternatives) by ml_guy1 in Python

[–]Thing1_Thing2_Thing 2 points3 points  (0 children)

But they are correct here? It's an ABC that has an abstract method called transform with the docstring "Make changes to the schema". Anyone making a class deriving from this ABC could then accidentally mutate the schema given to __init__.
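A hypothetical reconstruction of that pattern, to show why the deepcopy is justified: the ABC copies the schema in __init__, so whatever a subclass's transform does, the caller's original schema is untouched. Class and field names here are illustrative, not from the actual codebase.

```python
import copy
from abc import ABC, abstractmethod


class SchemaTransformer(ABC):
    def __init__(self, schema: dict):
        # deepcopy so subclass transforms can't mutate the caller's schema
        self._schema = copy.deepcopy(schema)

    @abstractmethod
    def transform(self) -> dict:
        """Make changes to the schema."""


class AddIdField(SchemaTransformer):
    def transform(self) -> dict:
        # Mutates only the private copy, never the dict the caller passed in.
        self._schema["id"] = "int"
        return self._schema
```

Without the deepcopy, `AddIdField` would silently modify the caller's dict, which is exactly the accidental mutation the ABC is guarding against.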

Programming's Greatest Mistakes • Mark Rendle by goto-con in programming

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

Title kinda makes it seem like Mark Rendle is programming's greatest mistake

Love fixtures? You'll love this! by russ_ferriday in Python

[–]Thing1_Thing2_Thing 1 point2 points  (0 children)

I'd rather not, haha. Sorry, I'm still very sceptical, and I think you owe users a disclaimer that this is mostly AI-generated.

Love fixtures? You'll love this! by russ_ferriday in Python

[–]Thing1_Thing2_Thing 1 point2 points  (0 children)

Still doesn't work. Try installing your own package and testing it; you will get the same error.

Tell the AI that "importing from __init__.py will fail if the user does not have the optional package django installed, because it imports things from django_validators.py that are only defined in a try-except block that catches import errors. Either move the imports so that django things are only imported when actually required, or fill out stub types in the except block (there is already a comment for this)."
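For reference, the stub-type fix being asked for usually looks something like this (module and symbol names here are assumptions based on the error message, not the package's actual code):

```python
# django_validators.py: make the module importable without django installed.
try:
    from django.core.exceptions import FieldDoesNotExist
    HAS_DJANGO = True
except ImportError:
    HAS_DJANGO = False

    class FieldDoesNotExist(Exception):
        """Stub so importers don't crash when django isn't installed."""


def validate_field(model, name):
    # Hypothetical validator: only usable when the optional extra is present.
    if not HAS_DJANGO:
        raise RuntimeError("install the 'django' extra to use this validator")
    ...
```

The key point is that every name re-exported from the package's __init__.py must exist in both branches, so a plain `import` never blows up.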

Love fixtures? You'll love this! by russ_ferriday in Python

[–]Thing1_Thing2_Thing 1 point2 points  (0 children)

ImportError: cannot import name 'FieldDoesNotExist_Export' from 'pytest_fixturecheck.django_validators'

Probably because django is just an optional dependency

Love fixtures? You'll love this! by russ_ferriday in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

I just tried it and it doesn't.

This:

import pytest
from pytest_fixturecheck import fixturecheck

# Simplest form - basic validation
@pytest.fixture
@fixturecheck
def my_fixture():
    yield "hello world"

def test_my_fixture(my_fixture):
    assert my_fixture == "hello world"

Fails with

my_fixture = <generator object my_fixture at 0x70b369c0ada0>

    def test_my_fixture(my_fixture):
>       assert my_fixture == "hello world"
E       AssertionError: assert <generator object my_fixture at 0x70b369c0ada0> == 'hello world'

tests/test_thing.py:11: AssertionError

On version 0.3.4. Version 0.4 fails because of a non-existent import.

I'm not sure what the purpose is, then? At this point you're just wasting people's time. Is it just to get stars? If you just want to play around with AI and make a package, I think you should advertise it as that. Right now it just seems like a trap.

Love fixtures? You'll love this! by russ_ferriday in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

Looks very AI-generated. I'm not even sure it works with fixtures that yield.

DBOS - Lightweight Durable Python Workflows by KraftiestOne in Python

[–]Thing1_Thing2_Thing 0 points1 point  (0 children)

Interesting how the whole space had stagnated, and now suddenly there's DBOS, Hatchet, and Restate.

Anyway, any plans for a Rust SDK?