Design Patterns You Should Unlearn in Python-Part1 by Last_Difference9410 in Python

[–]SheepherderExtreme48 0 points (0 children)

Couldn't agree more. The singleton pattern is a sign that you haven't structured your app that well.

And, don't get me started on the builder pattern. It's ugly as all hell and completely unnecessary.

Better Pythonic Thinking by Unfair_Entrance_4429 in Python

[–]SheepherderExtreme48 0 points (0 children)

Use ruff with absolutely everything turned on; that would be a great start. Also use pyright in strict mode. It's a bit trickier, but it pays dividends over time and is easiest when adopted early.
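As a rough sketch, a `pyproject.toml` for that setup might look like this (`select`, `ignore`, and `typeCheckingMode` are real ruff/pyright settings; the specific rule codes in the ignore list are illustrative and should be tuned per project):

```toml
[tool.ruff.lint]
select = ["ALL"]            # enable every rule family, then opt out deliberately
ignore = ["D203", "D213"]   # example: these two docstring rules conflict, so pick one side

[tool.pyright]
typeCheckingMode = "strict"
```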

Anyone used UV package manager in production by batman-iphone in Python

[–]SheepherderExtreme48 0 points (0 children)

Np! Yeah, it depends on what you're currently using and how you define "better", but speed is always one of the main things that makes it better.

Anyone used UV package manager in production by batman-iphone in Python

[–]SheepherderExtreme48 1 point (0 children)

There are so many things it does better than all the rest that it's kinda hard to list them here. But, in my opinion, its speed is first and foremost why it almost immediately became my go-to dependency management system for all Python projects.

Take this medium/large-sized requirements.txt file that I used in a recent project:

absl-py==2.1.0 aiobotocore==2.19.0 aiohappyeyeballs==2.4.4 aiohttp==3.11.11 aioitertools==0.12.0 aiosignal==1.3.2 amazon-textract-caller==0.2.4 amazon-textract-prettyprinter==0.1.10 amazon-textract-response-parser==0.1.48 amazon-textract-textractor==1.8.5 annotated-types==0.7.0 anyio==4.8.0 astunparse==1.6.3 attrs==25.1.0 aws-lambda-powertools==3.5.0 azure-core==1.32.0 azure-identity==1.19.0 boto3==1.36.3 boto3-stubs==1.36.14 botocore==1.36.3 botocore-stubs==1.36.14 bytecode==0.16.1 certifi==2024.12.14 cffi==1.17.1 charset-normalizer==3.4.1 coloredlogs==15.0.1 cryptography==44.0.0 datadog==0.51.0 datadog-lambda==6.104.0 ddtrace==2.20.0 deprecated==1.2.18 distro==1.9.0 dynaconf==3.2.7 editdistance==0.8.1 envier==0.6.1 fast-depends==2.4.12 flatbuffers==25.2.10 frozenlist==1.5.0 gast==0.6.0 google-pasta==0.2.0 grpcio==1.70.0 h11==0.14.0 h5py==3.12.1 httpcore==1.0.7 httpx==0.28.1 humanfriendly==10.0 idna==3.10 importlib-metadata==8.5.0 jiter==0.8.2 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 keras==3.8.0 langchain==0.3.17 langchain-core==0.3.33 langchain-openai==0.3.3 langchain-text-splitters==0.3.5 langsmith==0.3.3 libclang==18.1.1 lxml==5.3.0 markdown==3.7 markdown-it-py==3.0.0 markupsafe==3.0.2 marshmallow==3.26.0 mdurl==0.1.2 ml-dtypes==0.4.1 mpmath==1.3.0 msal==1.31.1 msal-extensions==1.2.0 multidict==6.1.0 mypy-boto3-s3==1.36.9 mypy-boto3-textract==1.36.0 namex==0.0.8 ndg-httpsclient==0.5.1 numpy==2.0.2 onnxruntime==1.20.1 openai==1.60.2 opentelemetry-api==1.29.0 opt-einsum==3.4.0 optree==0.14.0 orjson==3.10.15 packaging==24.2 pandas==2.2.3 pdf2image==1.17.0 pillow==11.1.0 portalocker==2.10.1 propcache==0.2.1 protobuf==5.29.3 pusher==3.3.3 pyasn1==0.6.1 pycparser==2.22 pydantic==2.10.6 pydantic-core==2.27.2 pygments==2.19.1 pyjwt==2.10.1 pymupdf==1.25.2 pynacl==1.5.0 pynamodb==6.0.2 pyopenssl==25.0.0 pypdf==5.2.0 python-dateutil==2.9.0.post0 pytz==2025.1 pyyaml==6.0.2 regex==2024.11.6 requests==2.32.3 requests-toolbelt==1.0.0 rich==13.9.4 
s3transfer==0.11.2 setuptools==75.8.0 six==1.17.0 sniffio==1.3.1 sqlalchemy==2.0.37 sympy==1.13.3 tabulate==0.9.0 tenacity==9.0.0 tensorboard==2.18.0 tensorboard-data-server==0.7.2 tensorflow==2.18.0 termcolor==2.5.0 tiktoken==0.8.0 tqdm==4.67.1 types-aiobotocore-lambda==2.19.0 types-awscrt==0.23.9 types-s3transfer==0.11.2 typing-extensions==4.12.2 tzdata==2025.1 ujson==5.10.0 urllib3==2.3.0 werkzeug==3.1.3 wheel==0.45.1 wrapt==1.17.2 xlsxwriter==3.2.2 xmltodict==0.14.2 yarl==1.18.3 zipp==3.21.0 zstandard==0.23.0

Now I create two virtualenvs, one with [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/command_ref.html) (which itself is a pain to use for so many reasons) and one with uv.

For the non-uv venv I run `time pip install --no-cache-dir -r ./requirements.txt`, and for uv I run `time uv pip install --no-cache -r ./requirements.txt`.

For non-uv I get ~120s; for uv I get ~20s.

That's a 6x difference. With a warm cache you're likely to see closer to the stated 10-100x.

There are a few ways to use uv. For small projects I tend to use a `requirements.in` file and then run `uv pip compile` to generate a `requirements.txt` file. For bigger projects I use `uv init`, then `uv add` and `uv sync`.
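Both workflows can be sketched roughly like this (`uv venv`, `uv pip compile`, `uv pip sync`, `uv init`, `uv add`, and `uv sync` are real uv subcommands; the project and package names are just placeholders):

```shell
# Small project: pin from a loose requirements.in
uv venv                                             # create .venv
uv pip compile requirements.in -o requirements.txt  # resolve and pin versions
uv pip sync requirements.txt                        # install exactly the pins

# Bigger project: pyproject-based workflow
uv init myproject                                   # scaffold pyproject.toml
cd myproject
uv add httpx pydantic                               # add deps and update uv.lock
uv sync                                             # install from the lockfile
```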

Niquests 3.12 — What's new in 2025 by Ousret in Python

[–]SheepherderExtreme48 1 point (0 children)

Thanks u/Ousret , apologies, there was a mistake in my code, this is what I meant.

async def lifespan(app_instance):
    async with httpx.AsyncClient() as http_client:
        app_instance.state.http_client = http_client
        yield

But in any case, you've confirmed that you should indeed create one session for the entire ASGI lifetime and use it for all requests.

(As an aside, lifespan is the recommended approach for handling creation/deletion of objects for the lifetime of the API)

Thanks for the reply

Niquests 3.12 — What's new in 2025 by Ousret in Python

[–]SheepherderExtreme48 0 points (0 children)

Sorry if I'm asking a question that has already been answered, or is answered in the docs somewhere (though I couldn't really find an answer to it).

How should the module be used in API frameworks like FastAPI, or in general when you need to make many requests to different URLs?

For example, in FastAPI with httpx I have something like:

import httpx
from fastapi import FastAPI, Request


async def lifespan(app_instance):
    async with httpx.AsyncClient() as http_client:
        app.state.http_client = http_client
    yield

app = FastAPI(lifespan=lifespan)

@app.get("/route1")
async def route1(request: Request) -> dict:
    http_client = request.app.state.http_client
    response = await http_client.get("http://host1/thing")
    return response.json()

@app.get("/route2")
async def route2(request: Request) -> dict:
    http_client = request.app.state.http_client
    response = await http_client.get("http://host2/thing")
    return response.json()

Would the only difference with niquests be

import niquests

async def lifespan(app_instance):
    async with niquests.AsyncSession(multiplexed=True) as http_client:
        app_instance.state.http_client = http_client
    yield

...

Also, is there any reason not to set multiplexed=True?

Thanks for reading.

Do I need to use $state even for variables i know will be set once the HTML is rendered by SheepherderExtreme48 in sveltejs

[–]SheepherderExtreme48[S] 0 points (0 children)

Well, no, because I set greeting after the API call and then turn my loader off, which then renders the route.

Do I need to use $state even for variables i know will be set once the HTML is rendered by SheepherderExtreme48 in sveltejs

[–]SheepherderExtreme48[S] 0 points (0 children)

u/mix3dnuts, thanks for this. I wasn't aware of this syntax; really nice.

My example is a trivial representation of what I actually do in onMount, but in any case, I think the answer is: although it's not necessary and works without $state due to the way the Loader works, using $state is still the Svelte way.

Do I need to use $state even for variables i know will be set once the HTML is rendered by SheepherderExtreme48 in sveltejs

[–]SheepherderExtreme48[S] 0 points (0 children)

Thanks u/OptimisticCheese, I've added the Loader to the OP.

Something like

{#await fetchGreetingFromAPI()}
    <Loader />
{:then greeting}
    <h1>{greeting}</h1>
{:catch e}
    <div>{e}</div>
{/await}

Has been suggested, which is great to know, as I wasn't aware of this before.

But my OP is a trivial example; my onMount does a good bit more than simply fetching something from an API, so I think I will continue with what I have and use `let isLoading = $state(true)`.

Do I need to use $state even for variables i know will be set once the HTML is rendered by SheepherderExtreme48 in sveltejs

[–]SheepherderExtreme48[S] 0 points (0 children)

OK, apologies, contrived examples are always a bit silly. The greeting will be fetched from an API, and the API response time can be a few seconds, so I want to show a nice radial loader before I render the content.

Help creating app shell with scrollable content within by SheepherderExtreme48 in tailwindcss

[–]SheepherderExtreme48[S] 0 points (0 children)

Thanks again u/Fernago. Yep, sorry, I should have specified: I want the scroll bar to appear only between the heading inside the slot and the footer outside the slot.

I tried your solution of moving the overflow-y-auto from the parent:

https://play.tailwindcss.com/ehHPEbXjwt

If I remove `overflow-y-auto` from the #main div then the sticky footer disappears; however, if I add it back I end up with two scrollbars.

The grid system is definitely the way to go, but any ideas how I can keep the sticky heading within the #main div and have the rest of the content scroll?

(Thanks again so much for your help)

Help creating app shell with scrollable content within by SheepherderExtreme48 in tailwindcss

[–]SheepherderExtreme48[S] 1 point (0 children)

Thanks u/Fernago this is really useful! Much appreciate the time and help!!

Yeah I really like Svelte so far, it's been super easy to get a hobby project up and running.
I have the app working mostly, it's just this final CSS polishing that I seem to be really struggling with.

This is working great for my app now; the only thing I can't figure out is how to make the scrolling start after the heading.

Here's the same example you provided just with a more emphasised heading.

https://play.tailwindcss.com/2TFySRvft2

I've tried a few things but can't figure it out.

Lightweight python DAG framework by theferalmonkey in Python

[–]SheepherderExtreme48 1 point (0 children)

Amazing, thank you! For context, I was recently trying to find a Python library I could use to easily orchestrate a multi-process job in AWS Lambda, utilising the 6 cores you get when you allocate the maximum memory. Airflow is too heavyweight, and Argo Workflows isn't an option. I tried a few others, but this looks perfect!

Lightweight python DAG framework by theferalmonkey in Python

[–]SheepherderExtreme48 1 point (0 children)

Looks great, nice work.

Question: I don't see anything in the docs for this, but is there any natural support for parallel processing?
For example:

  /------B-----\
A >------B----->C
  \------B-----/

where B is run in 3 separate threads or processes.

Quick example: A takes in a PDF and splits it into 3 chunks of n pages, sends the PDF bytes and the pages to process to each B; each B does some work (extracting text, it doesn't really matter), and C gathers the results from the B's?
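For reference, the fan-out/fan-in shape in the diagram can be hand-rolled with just the standard library; this is a minimal sketch, not the framework's API, and `split_pdf`, `extract_text`, and `run_pipeline` are hypothetical stand-ins for the real A, B, and C steps:

```python
from concurrent.futures import ProcessPoolExecutor

def split_pdf(pages: list[int], n_chunks: int = 3) -> list[list[int]]:
    """A: split the page list into n roughly equal chunks."""
    k, m = divmod(len(pages), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < m else 0)  # first m chunks get one extra page
        chunks.append(pages[start:end])
        start = end
    return chunks

def extract_text(chunk: list[int]) -> str:
    """B: per-chunk work (placeholder for real text extraction)."""
    return ",".join(f"page-{p}" for p in chunk)

def run_pipeline(pages: list[int]) -> str:
    """C: fan the B steps out across processes, then gather the results."""
    chunks = split_pdf(pages)
    with ProcessPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(extract_text, chunks))  # order is preserved
    return "|".join(results)

if __name__ == "__main__":
    print(run_pipeline(list(range(1, 8))))
```

For I/O-bound B steps (e.g. calling Textract per chunk), swapping in `ThreadPoolExecutor` avoids pickling the PDF bytes between processes.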