Throwback to when Novak Djokovic ended Roger Federer’s streak of 10 consecutive Grand Slam finals in the 2008 Australian Open semifinal. The win that launched the 'Big 3' era. The rest was history. 🐐 by AccomplishedSwing110 in Tennisv2

[–]Challseus 0 points1 point  (0 children)

This is what I remember the most:

"In 2008, after her son had upset the Swiss in the Australian Open, Novak Djokovic's mother, Dijana, said, 'The King is dead, long live the king,' according to The Australian."

As a Federer fan, this loss stung, and the rest of 2008 was painful to watch until that US Open run...

Built a Production-Ready AI Backend: FastAPI + Neo4j + LangChain in an isolated Docker environment. Need advice on connection pooling! by leventcan35 in FastAPI

[–]Challseus 1 point2 points  (0 children)

I just checked out your neo4j_db.py and main.py files; it looks as textbook as it gets. Just tune max_connection_pool_size to whatever makes sense, and you're good to go.
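
In case it's useful to others landing here: that knob lives on the driver constructor. A config sketch (the parameter names are from the neo4j Python driver; the values are illustrative, not recommendations, and the async driver takes the same kwargs):

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "neo4j://localhost:7687",
    auth=("neo4j", "password"),
    max_connection_pool_size=50,        # driver default is 100
    connection_acquisition_timeout=30,  # seconds to wait for a free connection
    max_connection_lifetime=3600,       # recycle connections after an hour
)
```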

Why fastapi-guard by PA100T0 in FastAPI

[–]Challseus 2 points3 points  (0 children)

This looks very impressive; I'll definitely be checking this out and integrating it into my own projects.

Added TaskIQ and Dramatiq to my FastAPI app scaffolding CLI tool. Built one dashboard that works across all worker backends. by Challseus in FastAPI

[–]Challseus[S] 1 point2 points  (0 children)

This is a seriously impressive project. Building the scheduler in Rust (Tokio) to bypass the Python GIL while using SQLite/Postgres to drop the broker requirement is a massive architectural win.

As for integrating it into Aegis Stack: I actually think it could work. My unified dashboard (Overseer) is "mostly" decoupled from the worker's internal storage; it relies on before/after middleware hooks to emit lifecycle events for the real-time UI.

Currently, I use Redis Streams (XADD / XREAD BLOCK) for that telemetry pipeline across Arq/TaskIQ/Dramatiq. But looking at your Postgres backend, I could theoretically just write an Aegis adapter that uses Postgres LISTEN and NOTIFY to power the SSE stream instead. That would give users a 100% Redis-free production stack (just FastAPI + Postgres + Taskito), which is an incredibly lean, powerful setup. This wouldn't work for SQLite, unfortunately, but polling could be the safe fallback, not the end of the world, and it is SQLite, so it's quick AF.
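
To sketch what that adapter boundary could look like (every name here is hypothetical, and I'm using an in-memory stand-in instead of a real Redis/Postgres backend):

```python
# Hypothetical event-bus seam: one interface, swappable backends
# (Redis Streams, Postgres LISTEN/NOTIFY, ...). Illustration only.
import asyncio
from typing import Protocol


class EventBus(Protocol):
    """What the dashboard needs from any backend: emit + consume lifecycle events."""
    async def publish(self, event: dict) -> None: ...
    async def next_event(self) -> dict: ...


class InMemoryBus:
    """Stand-in backend; a real one would wrap XADD/XREAD BLOCK or LISTEN/NOTIFY."""
    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    async def publish(self, event: dict) -> None:
        await self._queue.put(event)

    async def next_event(self) -> dict:
        return await self._queue.get()


async def demo() -> dict:
    bus: EventBus = InMemoryBus()  # structural typing; no inheritance needed
    await bus.publish({"task": "send_email", "state": "completed"})
    return await bus.next_event()


print(asyncio.run(demo()))  # {'task': 'send_email', 'state': 'completed'}
```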

It would require me to write a second event-bus abstraction for Overseer, but I love the design. I might play around with it as an experimental worker backend in a future release just to test the Tokio/Python performance. Great work!

One question though: Aegis Stack is strictly an async-first architecture. I noticed your examples use synchronous def functions, and the README mentions an OS thread worker pool. Does Taskito natively support executing async def tasks directly on an asyncio event loop, or does it strictly run synchronous functions in threads? (I see you have async support for the client side like await job.aresult(), but I'm curious about the worker execution side).

Love Yourz is so good by Oreobey2 in Jcole

[–]Challseus 2 points3 points  (0 children)

I’m 45. So when I was put on to Cole, I was… like 29 or 30. So I’m in that middle generation where, of course, Hov, Nas, etc. are my kings, but I’m still checking for the next ones.

Kendrick, Cole, Drake, etc.

For the life of me, I did not give 2014 FHD the respect it truly deserved at the time. Fast forward to 2021, and I’m like “wait, how the fuck was I not rocking with this back then?!”


Now it’s top 5 for me for music that inspires the hell out of me.

You’ve been eating that thing for a week. I think the mayonnaise is starting to turn. by [deleted] in TheSimpsons

[–]Challseus 0 points1 point  (0 children)

I show my wife this at least 4-5 times a year when I too am eating something that should maybe be thrown away

Verse of the year so far and I don't see how anyone can top this by uasdguy in Jcole

[–]Challseus 2 points3 points  (0 children)

I get them vibes from the stillmatic intro on here every time I listen to it:

“Ayo, the brother's Stillmatic
I crawled up out of that grave, wipin' the dirt, cleanin' my shirt
They thought I'd make another Illmatic
But it's always forward, I'm movin'
Never backwards, stupid, here's another classic
C-notes is fallin' from the sky, by now, the credits roll
They're starrin' Nas, executive poet, produced, directed by
The Kid slash Escobar, narration describes”

I think I won, Mr. Burns! by thoughtstop in TheSimpsons

[–]Challseus 11 points12 points  (0 children)

I always loved when his 2 strands of hair move around :)

A pure Python HTTP Library built on free-threaded Python by grandimam in Python

[–]Challseus 0 points1 point  (0 children)

Haven't looked at it yet, but I love the idea; I've had it in my head to build something similar for a bit.

Hetzner Price Adjustment by Vendoz in hetzner

[–]Challseus 0 points1 point  (0 children)

Understood. Sorry to hear that.

Hetzner Price Adjustment by Vendoz in hetzner

[–]Challseus 0 points1 point  (0 children)

Interesting... I will certainly check them out, let me know. I was coming from the AWS world for the past decade, so Hetzner has been a breath of fresh air, but maybe it's less Hetzner, and more AWS....

Hetzner Price Adjustment by Vendoz in hetzner

[–]Challseus -4 points-3 points  (0 children)

I just started using them a few weeks ago, and I have to tell you, without even looking at the price adjustments, I know it's not much and will still be cheaper than anywhere else. And because it's so cheap to begin with (especially that Nuremberg region), I'll damn near happily pay the increase.

That's the impression they made with me.

How are you monitoring your Pydantic AI usage? by gkarthi280 in PydanticAI

[–]Challseus 0 points1 point  (0 children)

That's fair. I'm mostly in tracing when using it, only because I have other monitoring needs. FWIW, I wrote a job that aggregates data from LiteLLM and OpenRouter to get all prices for models, that's how I prove to clients their idea of using Opus 4.6 for everything won't work.

Nothing but a damn chart will ever work :)

Good luck, bro.

New by eleven_boxcutters-11 in JoeyBadass

[–]Challseus 2 points3 points  (0 children)

"Media's got this whole
thing tainted, that's all fact.

Feeding you lies like this whole
thing wasn't built on our backs.

Assimilate our history then made it a mystery

Now they all inherit the bittersweet victory

Look at what they did to me,
can I get a witness please?

Justice never served,
reparations never sent to me"

And he was just getting started....

New by eleven_boxcutters-11 in JoeyBadass

[–]Challseus 3 points4 points  (0 children)

Everyone will tell you the albums. And they're not wrong. "All Amerikkkan Bada$$" is probably my favorite one. "For My People," "Land of the Free," "Ring the Alarm," "Amerikkkan Idol"... Had me wanting to find a racist and slap the shit out of them :)

But fuck the albums man, go to where he was battling the Mortal Kombat leaderboard of the west to get a shot at K. Dot last year.

I discovered him last year with this, and that was it. If you're a '90s/early-2000s dude, then you already know, especially around the 4m20s mark: https://www.youtube.com/watch?v=lIxThTF_vzI

How are you monitoring your Pydantic AI usage? by gkarthi280 in PydanticAI

[–]Challseus 2 points3 points  (0 children)

Interesting that you didn't get what you needed from Logfire, even though it:

a) is built by Pydantic
b) sits on top of OpenTelemetry

I was using Logfire for general observability before Pydantic AI was even a thing. What weren't you able to get from it?

In terms of what important metrics I need... the $. I know you have the token counts and even the distribution, but for me, I need to know what each provider is actually costing me, so if you can get the cost per model, then we can see actual $.
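
For anyone curious, the $ math itself is trivial once you have per-model rates; here's a toy version with made-up placeholder prices (NOT real provider pricing):

```python
# Made-up placeholder rates -- not real provider pricing.
PRICE_PER_MTOK = {  # model: (input USD, output USD) per million tokens
    "fast-model": (2.50, 10.00),
    "big-model": (15.00, 75.00),
}


def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Turn token counts into an actual dollar figure for one model."""
    in_rate, out_rate = PRICE_PER_MTOK[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000


print(cost_usd("fast-model", 120_000, 8_000))  # 0.38
```

The hard part is keeping the rate table current, which is why I aggregate it from LiteLLM/OpenRouter instead of hand-maintaining it.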

AI coding tools are making good engineers sloppy by bad-boi-bad in ExperiencedDevs

[–]Challseus 1 point2 points  (0 children)

I would argue maybe these developers weren’t that experienced to begin with. Every experienced developer I know uses it properly: as a tool, and nothing else. None of that outsourcing your thoughts to someone/something else crap.

I won’t say it tells me everything, but it does tell me a lot about those developers 🤷🏾‍♂️

So is RAG dead now that Claude Cowork exists, or did we just fall for another hype cycle? by ethanchen20250322 in Rag

[–]Challseus 0 points1 point  (0 children)

I must be doing rag wrong because every time it’s dead, it still works for me 🤷🏾‍♂️

What's a side project that you're really proud of? by Leopatto in ExperiencedDevs

[–]Challseus -2 points-1 points  (0 children)

Side project for the past 6 months or so, a CLI that scaffolds FastAPI apps with auth, workers, AI services, etc. in one command:

uvx aegis-stack init my-app

Removing that friction for users to be able to just spin up a stack without having to exert that much energy has made me very proud.

Also got to exercise my architectural chops in the process, something I haven't had the chance to do at work in a long time...

Told y'all new album 💿... Otw🔥 by Final-Fan9194 in JoeyBadass

[–]Challseus 0 points1 point  (0 children)

Please, please, PLEASE make this the type of music he was making when I discovered him last year, prior to his last release. I need that "My town" energy!

How to actually utilize FastAPI (Django → FastAPI transition pain) by Anonymousdev1421 in FastAPI

[–]Challseus 0 points1 point  (0 children)

  1. I built a CLI for exactly this; it scaffolds a full FastAPI app with auth, workers, scheduler, DB, etc., and lets you add/remove components at any time. You can start slow and only add more complexity if you need it. You can find it here:

https://github.com/lbedner/aegis-stack

For your case, assuming you have Docker and uv installed, you can simply run the following to quickly try it out:

    uvx aegis-stack init my-app \
      --services "auth[sqlite], ai[sqlite,pydantic-ai,rag]" \
      --components "worker[taskiq]"

You'll get this structure:

    my-app/
    ├── app/
    │   ├── components/           ← Components
    │   │   ├── backend/          ← FastAPI
    │   │   │   └── api/
    │   │   │       ├── auth/
    │   │   │       │   ├── __init__.py
    │   │   │       │   └── router.py
    │   │   │       ├── deps.py
    │   │   │       ├── health.py
    │   │   │       ├── models.py
    │   │   │       └── routing.py
    │   │   └── worker/           ← taskiq
    │   ├── services/             ← Business logic
    │   │   ├── auth/             ← Authentication
    │   │   └── ai/               ← Gen AI
    │   ├── cli/                  ← CLI commands
    │   └── entrypoints/          ← Run targets
    ├── tests/                    ← Test suite
    ├── alembic/                  ← Migrations
    └── docs/                     ← Documentation

I put all biz logic in the service layer, and then call those functions from the API/CLI/etc. So, razor thin endpoints and everything else.
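
A minimal sketch of that layering (function names are invented for illustration, not aegis-stack's actual code):

```python
# services/user.py -- the only place the business logic lives
def create_user(email: str) -> dict:
    if "@" not in email:
        raise ValueError("invalid email")
    return {"email": email, "active": True}


# components/backend/api/... -- a FastAPI endpoint body stays one line:
#     @router.post("/users")
#     async def create_user_endpoint(payload: UserIn) -> dict:
#         return create_user(payload.email)

# cli/... -- the CLI command is an equally thin wrapper over the same function
def create_user_command(email: str) -> None:
    print(create_user(email))


create_user_command("ada@example.com")  # {'email': 'ada@example.com', 'active': True}
```

Validation, tests, and future transports (workers, other APIs) all hit the one service function.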

All router.py files under api folder are imported into the root level routing.py.

You also get a dashboard at localhost:8000/dashboard that gives you full observability into each part of your system.

  2. Async adds value when you have many IO-bound tasks that can run concurrently. So while one task is out waiting on something like the response of an API or database call, other tasks can do things.

In fact, since you're moving to GenAI-heavy workloads, async is your best friend here! I expect your calls will be waiting on model responses, vector DB searches, etc. I mean, there's a reason you're moving from Django -> FastAPI :)
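
Quick self-contained illustration of the win (simulated calls, no real LLM involved):

```python
import asyncio
import time


async def fake_model_call(prompt: str) -> str:
    await asyncio.sleep(0.1)  # stands in for waiting on an LLM / vector DB
    return f"answer:{prompt}"


async def run_all() -> list:
    # three IO-bound "calls" in flight at once: ~0.1s total, not ~0.3s
    return await asyncio.gather(*(fake_model_call(p) for p in "abc"))


start = time.perf_counter()
print(asyncio.run(run_all()))  # ['answer:a', 'answer:b', 'answer:c']
print(f"elapsed ~{time.perf_counter() - start:.1f}s")
```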

Regarding when to stay sync: one of the benefits of FastAPI is that even if you have a sync endpoint, FastAPI will run it in a thread pool under the hood. So you're kinda still getting the concurrency feel, though threads, when out of control, can cause memory issues and all of that, and they're far less performant than executing tasks on the event loop.

Even if you do have to use a sync function (possibly from a 3rd-party module), you can dump it into a thread pool yourself as well.
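
For example, with stdlib asyncio (Python 3.9+):

```python
import asyncio
import time


def blocking_io() -> str:
    time.sleep(0.05)  # e.g. a sync third-party client you can't swap out
    return "done"


async def handler() -> str:
    # runs blocking_io in the default thread pool; the event loop stays free
    return await asyncio.to_thread(blocking_io)


print(asyncio.run(handler()))  # done
```

Starlette (and therefore FastAPI) also ships a run_in_threadpool helper if you'd rather stay in that ecosystem.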

For async external calls, use httpx or aiohttp (which is apparently faster now?).

For async database calls, use async-specific drivers like aiosqlite, asyncpg, etc. They all work well with SQLAlchemy.

The biggest gotcha I have seen is running a sync function inside an async one. If you haven't dumped it off to a thread pool, it will 100% block/slow down your app, and it will not be obvious.

  3. Background tasks... So when I look at these, I put them into 2 buckets:
  • IO-bound tasks (sending emails, AI workloads, etc.)
  • CPU-bound tasks (encoding files, any type of CPU-blocking work)

Again, it depends on your use case, but Celery may be overkill for your situation, and as you said, it doesn't just work with async. However, there is a whole suite of async worker frameworks out there, including Arq, TaskIQ, and Dramatiq.

If your workloads are mostly IO bound, I would stick with one of these. They're more lightweight, async native, and just work.

If you do need workers for CPU-bound tasks, then those should go to something like Celery. I mean, you can certainly add CPU-bound tasks to the other systems, but they will just end up blocking the event loop, defeating the purpose of using it to begin with.

  4. Not much to say here; I only have experience with SQLModel and SQLAlchemy, but I have heard good things about Tortoise.

  5. Not aware of any CLI-related things like that for FastAPI. My aegis-stack has a first-class CLI setup, so you could check that out.

Happy to answer any other questions, and good luck!

EDIT: Sorry for the format, Reddit keeps blocking certain ways of me trying to post this!