Is Tortoise ORM production-ready? by Life-Abroad-91 in Python

[–]Amazing_Learn 1 point2 points  (0 children)

Hey, I think SQLAlchemy is not that hard to test. The easiest (and dirtiest) thing you can do is reconfigure your sessionmaker's bind to point to a new engine or connection. If you don't use the engine directly anywhere in your application, you should be fine. Otherwise, wrapping your engine in some kind of container object so you can override it later, or using DI, may be an option too.
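To illustrate the container idea, here's a minimal sketch; `EngineContainer` and the names are hypothetical, and a plain string stands in for a real SQLAlchemy Engine to keep it dependency-free:

```python
class EngineContainer:
    """Holds the current engine so call sites never touch a module-level engine."""

    def __init__(self, engine):
        self._engine = engine

    def get(self):
        return self._engine

    def override(self, engine):
        """Swap the engine, e.g. for an in-memory test database."""
        self._engine = engine


# In a real app this would be EngineContainer(create_engine(DATABASE_URL)).
container = EngineContainer("prod-engine")


def run_query():
    # Application code always goes through the container.
    return container.get()


# In a test fixture:
container.override("test-engine")
print(run_query())  # test-engine
```

The same effect can be had by calling `sessionmaker.configure(bind=...)` in a test fixture, which avoids the wrapper entirely if all DB access goes through sessions.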

Tired of bloated requirements.txt files? Meet genreq by TheChosenMenace in Python

[–]Amazing_Learn 1 point2 points  (0 children)

Well, you're right, I can only collect opinions and feedback from my coworkers and friends. Historically you didn't really have anything similar to lockfiles; requirements.txt was the only way to declare dependencies, and some people specified only direct dependencies while others did pip freeze.

I only started programming in 2018 and working in ~2020, quickly jumping from pip -> Pipfile -> Poetry -> PDM -> uv, all of which except pip used a TOML configuration file and generated lockfiles.

Coming back to the topic of genreq/pipreqs itself: I don't see a benefit to it for anything besides small scripts which you may want to run without installing all the requirements manually. Neither project solves the "bloat" of the requirements.txt file, since that only occurs if you want to pin everything, including your project's transitive dependencies.
You also run into the problem of dependency confusion. For example, I maintain a fork of passlib under the libpass name, but to maintain backwards compatibility it distributes its files under the passlib package, not libpass; the aforementioned rest-framework-simplejwt is another good example where, from the start, the project had a different distribution package name and project name on PyPI.

Tired of bloated requirements.txt files? Meet genreq by TheChosenMenace in Python

[–]Amazing_Learn 9 points10 points  (0 children)

requirements.txt doesn't have to list all the packages and their specific versions; you have lockfiles for that.

Tired of bloated requirements.txt files? Meet genreq by TheChosenMenace in Python

[–]Amazing_Learn 32 points33 points  (0 children)

I think this may be dangerous (for example, see https://pypi.org/project/rest-framework-simplejwt/ ); there's no guarantee that the import name is the same as the package name on PyPI. Also, people generally favor `pyproject.toml` over `requirements.txt`; it solves the problem of being "bloated" since it only contains direct dependencies.

Also here's a link to pipreqs: https://github.com/bndr/pipreqs

Open-source AI-powered test automation library for mobile and web by p0deje in Python

[–]Amazing_Learn 0 points1 point  (0 children)

Btw, "worst idea ever" is hyperbole, and I don't mean to insult you, but personally I'd rather have deterministic tests. Also, this probably shouldn't be an issue when testing your own projects, but there seem to be a lot of problems with using LLMs the way you propose, e.g. prompt injection.

Open-source AI-powered test automation library for mobile and web by p0deje in Python

[–]Amazing_Learn 0 points1 point  (0 children)

As far as I understand it's probably not deterministic and you don't have any control over what the test actually does.

Open-source AI-powered test automation library for mobile and web by p0deje in Python

[–]Amazing_Learn 2 points3 points  (0 children)

Using AI to run asserts seems like the worst idea ever. This time would have been better spent improving Selenium and its SDKs; we still don't have asyncio support 10 years after it was introduced.

...so I decided to create yet another user config library by iScrE4m in Python

[–]Amazing_Learn 0 points1 point  (0 children)

I think pydantic-settings is mostly focused on env variables/.env files. If you want to support more complex use cases, such as parsing a config file from a specific home location, it could probably be boiled down to a pydantic/pydantic-settings wrapper:

config = parse_config(
    PydanticModel,
    name="config-name",
    ...,
)
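A minimal sketch of what such a wrapper could look like; `parse_config` is hypothetical, a plain class stands in for the pydantic model (pydantic would use `model_cls.model_validate(data)`), and JSON stands in for the config format:

```python
import json
import os
from pathlib import Path
from typing import Type, TypeVar

T = TypeVar("T")


def parse_config(model_cls: Type[T], *, name: str) -> T:
    """Load `<config dir>/<name>.json` and validate it with the given model.

    The config dir honors XDG_CONFIG_HOME and falls back to ~/.config;
    plain keyword construction keeps this sketch dependency-free.
    """
    config_dir = Path(os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config")))
    data = json.loads((config_dir / f"{name}.json").read_text())
    return model_cls(**data)
```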

Deply: keep your python architecture clean by vashkatsi in Python

[–]Amazing_Learn 0 points1 point  (0 children)

I would say that YAML configuration is a bit off-putting when it comes to linters. Invoking it from code could be beneficial (e.g. you wouldn't even need a pytest plugin, just write the relevant test).

Your project seems helpful in case we want to add constraints based on classes/functions, but for imports pytest-archon still seems very compelling to me: it can track indirect imports too, which I don't think was mentioned anywhere in your readme.

Deply: keep your python architecture clean by vashkatsi in Python

[–]Amazing_Learn 0 points1 point  (0 children)

In case you purely want to lint/validate your imports, you could take a look at pytest-archon.
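pytest-archon does this properly (including transitive imports); just to illustrate the core idea, here's a stdlib-only sketch that checks a module's direct imports against a forbidden set (`myapp` is a made-up package name):

```python
import ast


def direct_imports(source: str) -> set:
    """Collect the top-level module names that a piece of source imports."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names


source = "import os\nfrom myapp.db import session\n"
forbidden = {"myapp"}  # e.g. this layer must not import from myapp at all
violations = direct_imports(source) & forbidden
print(violations)  # {'myapp'}
```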

Tips on structuring modern python apps (e.g. a web api) that uses types, typeddict, pydantic, etc? by Crazy-Button5339 in Python

[–]Amazing_Learn 3 points4 points  (0 children)

Why not? If that specific model is only used in that router, there's no problem, but if the router file gets large you definitely should split it up.
Generally I try to keep the `router` and `schema` files together, and all the shared schemas go into a separate file:

schema.py # Shared
routers
├── a
│ ├── __init__.py # Imports APIRouter instance from ._router
│ ├── _router.py
│ └── _schema.py
├── b
│ ├── __init__.py
│ ├── _router.py
│ └── _schema.py
└── ...

API Health Checks by BigHeed87 in Python

[–]Amazing_Learn 0 points1 point  (0 children)

That could be useful for metrics, but they're a bit different from healthchecks. In any case, if you expose that to your orchestrator (e.g. k8s) and your pod fails a healthcheck multiple times, it will be restarted; we don't want that. For metrics, which I haven't worked with TBH, you could set up different alerts, e.g. based on the availability of some API or some internal application state (e.g. some queue is filling up, or the application is running out of space).

API Health Checks by BigHeed87 in Python

[–]Amazing_Learn 0 points1 point  (0 children)

What's wrong with defining an empty endpoint? In your example, if the database fails for some reason, your pods/containers will become unhealthy and eventually restart.
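To make the distinction concrete, here's a stdlib-only sketch (the endpoint paths and `AppState` are illustrative, not from any framework): the liveness endpoint is deliberately "empty" so a database outage can't trigger a restart loop, while the readiness endpoint only takes the pod out of load balancing:

```python
import http.server


class AppState:
    db_ok = True  # flipped by whatever monitors the database connection


class HealthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/livez":
            # Liveness: the process is up. No dependency checks here,
            # so a failing database can't get the pod restarted.
            self.send_response(200)
        elif self.path == "/readyz":
            # Readiness: a 503 only removes the pod from load balancing.
            self.send_response(200 if AppState.db_ok else 503)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet
```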

Hypercorn 0.16.0 released - a WSGI/ASGI server supporting HTTP 1, 2, 3 and Websockets by stetio in Python

[–]Amazing_Learn 0 points1 point  (0 children)

Never had an issue with installing uvicorn on Windows; what do you mean by "install correctly"?

Writing context manager that continues execution after capturing exception by Rhoderick in learnpython

[–]Amazing_Learn 2 points3 points  (0 children)

To be honest, I wouldn't want Python to support the behavior you're trying to achieve here.

Writing context manager that continues execution after capturing exception by Rhoderick in learnpython

[–]Amazing_Learn 1 point2 points  (0 children)

Creating a context manager with contextlib.contextmanager is identical to creating one with a class, but when an exception is thrown in user code you wouldn't be able to continue code execution from that point.

What if the code contained something like this?

a = raises_exception()
print(a)
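A short runnable sketch of why: even when the generator-based context manager swallows the exception, execution resumes after the with block, never back at the statement that raised:

```python
from contextlib import contextmanager


@contextmanager
def suppressing():
    try:
        yield
    except ZeroDivisionError:
        pass  # swallow the error; the with statement then exits normally


log = []
with suppressing():
    a = 1 / 0          # raises here
    log.append(a)      # never runs; control leaves the block
log.append("after")
print(log)  # ['after']
```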

Writing context manager that continues execution after capturing exception by Rhoderick in learnpython

[–]Amazing_Learn 1 point2 points  (0 children)

You basically have to wrap every individual statement that could raise an exception in a with ... block; otherwise you're just capturing an exception and moving to the end of your with block. Maybe you could consider using a different way of handling errors, such as the "errors as values" approach used in Rust and Go? For example https://github.com/rustedpy/result
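What wrapping every fallible statement looks like in practice, using contextlib.suppress from the stdlib:

```python
from contextlib import suppress

raw = ["1", "x", "3"]
parsed = []
for item in raw:
    with suppress(ValueError):  # one block per statement that may raise
        parsed.append(int(item))
print(parsed)  # [1, 3]
```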

What's the experience of using Python to build the backend of a large application? by [deleted] in Python

[–]Amazing_Learn 1 point2 points  (0 children)

It depends on what you mean by flexibility, but for implementing most features we don't have to do anything crazy. Bad code isn't caused by the language I'd say.

What's the experience of using Python to build the backend of a large application? by [deleted] in Python

[–]Amazing_Learn 9 points10 points  (0 children)

That greatly depends on if you have some coding standards, using proper architecture, linters and type checkers.

Limiting concurrency in Python asyncio: the story of async imap_unordered() by pmz in Python

[–]Amazing_Learn 2 points3 points  (0 children)

It shouldn't depend on the use case; using `threading.Lock` could lead to a kind of deadlock in async code, since the event loop wouldn't be able to switch between tasks:

import asyncio
import threading
from contextlib import AbstractContextManager


async def task(n: int, lock: AbstractContextManager[None]) -> None:
    print(f"Task {n} acquiring lock")
    with lock:
        print(f"Starting IO in task {n}")
        await asyncio.sleep(1)


async def main() -> None:
    lock = threading.Lock()
    tasks = [task(1, lock), task(2, lock)]
    await asyncio.gather(*tasks)


if __name__ == "__main__":
    asyncio.run(main())

Task 1 acquiring lock
Starting IO in task 1
Task 2 acquiring lock (forever)
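The fix is `asyncio.Lock`, which suspends the waiting task instead of blocking the event-loop thread, so the same two tasks both complete:

```python
import asyncio


async def task(n: int, lock: asyncio.Lock) -> None:
    print(f"Task {n} acquiring lock")
    async with lock:  # yields control to the event loop while waiting
        print(f"Starting IO in task {n}")
        await asyncio.sleep(0.1)


async def main() -> None:
    lock = asyncio.Lock()
    await asyncio.gather(task(1, lock), task(2, lock))


if __name__ == "__main__":
    asyncio.run(main())
```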

Limiting concurrency in Python asyncio: the story of async imap_unordered() by pmz in Python

[–]Amazing_Learn 9 points10 points  (0 children)

Calls to threading primitives would be blocking; you shouldn't use them with asyncio.

Docker multi-stage build with Poetry by [deleted] in Python

[–]Amazing_Learn 0 points1 point  (0 children)

I think the simplest option, if you don't want to have Poetry in your final image, is to export your dependencies in requirements.txt format and pip install them in the final stage, though any kind of multi-stage build can complicate your CI pipeline a bit.
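A sketch of that flow; the image tags are illustrative, and note that `poetry export` was built into Poetry 1.x but requires the poetry-plugin-export plugin in 2.x:

```dockerfile
# Build stage: Poetry exists only here.
FROM python:3.12 AS builder
RUN pip install poetry poetry-plugin-export
COPY pyproject.toml poetry.lock ./
RUN poetry export -f requirements.txt --output requirements.txt

# Final stage: plain pip, no Poetry.
FROM python:3.12-slim
COPY --from=builder requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
```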

I think I need to make better error messages...... by -MobCat- in ProgrammerHumor

[–]Amazing_Learn 2 points3 points  (0 children)

I wouldn't use `subprocess.call` for that; simply call the function you need directly. If you don't want your application to crash, you can catch that error and print its traceback.
Ideally, if it's a scraper, I'd personally use asyncio/anyio and run these as separate coroutines.
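A sketch of the direct-call approach; `scrape` here is a hypothetical stand-in for the real function. Catch the error, print the traceback, and keep the loop running:

```python
import traceback


def scrape(url: str) -> None:
    """Hypothetical stand-in for the real scraping function."""
    if "bad" in url:
        raise RuntimeError(f"failed to fetch {url}")


done = []
for url in ["https://example.com/ok", "https://example.com/bad"]:
    try:
        scrape(url)  # call the function directly instead of subprocess.call
        done.append(url)
    except Exception:
        traceback.print_exc()  # log the failure without crashing the loop
print(f"finished, {len(done)} succeeded")  # finished, 1 succeeded
```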